I don't get the comments trashing this. If it slightly beats or even matches Opus 4.6, it means Meta is capable of building a model competitive with the leading AI company. Sure, they spent a lot of money and will have on-going costs. But how much more work would it take to turn that into a coding agent people are willing to try (and pay for) along side their usage of a collection of agents (Claude, Codex, etc)?
Also means Meta doesn't have to pay another company to use a SOTA model across all their products (including IG, WhatsApp, and VR), which will matter to their balance sheet long term (despite the constant R&D spend).
prodigycorp 4 hours ago [-]
Comments trashing this come from rightly skeptical people who remember the benchmaxxing of Llama 4. This model was around as early as a couple of months ago, but they didn't release it because it was only at Gemini 2.5 Pro levels.
zozbot234 4 hours ago [-]
The Llama 4 series was one of the earliest large MoEs to be made publicly available. People just ignored it because they were focused on running smaller, denser models at the time; we should know better these days.
dilap 4 hours ago [-]
DeepSeek R1 was a publicly available MoE model that was getting a ton of attention before Llama 4. Llama 4 didn't get much attention because it wasn't good.
prodigycorp 4 hours ago [-]
the models were objectively horrible
NitpickLawyer 4 hours ago [-]
They really weren't horrible. They were ~GPT-4o, with the added benefit that you could run them on premises. Just "regular" models, non-"thinking". Inefficient architecture (ratio of active to total parameters) but otherwise "decent" models. They got trashed online by bots and Chinese shills (I was online that weekend when it happened; it was something to behold). Just because they were non-thinking when thinking was clearly the future doesn't make them horrible. Not SotA by any means, but still.
refulgentis 3 hours ago [-]
Wrote longer comment steel-manning this, posted it to a reply, then realized you might like to know they had a reasoning model on deck ready for release in the next 2-4 weeks.
Got shitcanned due to bad PR & Zuck God-King terraforming the org, so there'd be a year delay to next release.
Real tragi-comedy, and you have no idea how happy it makes me to see someone in the wild saying this. It sounds so bizarre to people given the conventional wisdom, but, it's what happened.
prodigycorp 4 hours ago [-]
Nah, I remember how disgusted I felt trying Llama 4 Maverick and Scout. They were both DOA; they couldn't even beat much smaller local models.
pixel_popping 53 minutes ago [-]
failing non-stop at tool calls on top of that.
refulgentis 3 hours ago [-]
I'll cosign what you said; simultaneously, your interlocutor's point is also well-founded, and it depresses me that it's not better known and sounds so... off... due to conventional wisdom plus God-King Zuck misunderstanding his own company and overreacting.
They beat Gemini 2.5 Flash and Pro handily on my benchmark suite. (tl;dr: tool calling and agentic coding).
Llama 4 on Groq was ~GPT 4.1 on the benchmark at ~50% the cost.
They shouldn't have released it on a Saturday.
They should have spent a month with it in private prerelease, working with providers.[1]
The rushed launch and ensuing quality issues got rolled into the hypebeast narrative of "DeepSeek will take over the world"
I bet it was super fucking annoying to talk to due to LMArena maxxing.
[1] my understanding is longest heads up was single-digit days, if any. Most modellers have arrived at 2+ weeks now, there's a lot between spitting out logits and parsing and delivering a response.
alex1138 2 hours ago [-]
Your comments seem to imply the engineers made a great product but Zuck intervened so now it's shit
refulgentis 14 minutes ago [-]
I don't know how Zuck intervening could change float32s in a trained model, so I don't think I think that, but maybe I'm parsing your words incorrectly.
modeless 2 hours ago [-]
It's a decent model if the benchmarks are to be believed, but it won't be close to Opus in usefulness for programming. None of these benchmarks completely capture what makes a model useful for day-to-day coding tasks, unfortunately. It will take time for them to catch up, and Opus will keep improving in the meantime. But it's good to have more competition.
redox99 4 hours ago [-]
> If it slightly beats or even matches Opus 4.6
It doesn't though
ryeguy_24 4 hours ago [-]
Curious on why you think this. Any data points that led you to this?
howdareme 4 hours ago [-]
The benchmarks they released
johnfn 2 hours ago [-]
What do you mean? In most cases, the benchmarks show a larger number for Muse and a smaller number for Opus.
spprashant 2 hours ago [-]
In Multimodal, yes, but Opus is definitely edging it out in the Text/Reasoning and Agentic benchmarks.
I think the general skepticism is because they are late to the race, and they are releasing an Opus-4.6-equivalent model now, when Anthropic is teasing Mythos.
blazespin 16 minutes ago [-]
Because of bots, trillion-dollar IPOs, and even bigger stakes. People need to better appreciate the level of manipulation going on. Social media has an outsized impact. Bots, and even people, are getting paid to post and upvote/downvote narratives.
ChipopLeMoral 4 hours ago [-]
> I don't get the comments trashing this.
People like to hate on Meta regardless of anything, and regardless of whether it's justified or not. Not saying it isn't, just that it's many people's default bias.
TobTobXX 27 minutes ago [-]
> Muse Spark is a natively multimodal reasoning model with support for [...] visual chain of thought [...].
Do they mean "the chain of thought is visible to the user" (i.e. not hidden like ChatGPT), or "the medium of the chain of thought is not text, but visuals" (i.e. thinking in images)?
I'd guess the former, since it wouldn't be economical to generate transient images just for thinking. But I'm not sure why they'd highlight that in that case. If it were the second thing, that'd be extremely interesting: the first model not to think in text.
daft_pink 5 hours ago [-]
This really reinforces the idea that the AI race and the Railroad Mania of the 19th century are very similar.
So many different companies are going to have similarly powerful AI that there will be no moat around it and it will be cheap. They will never earn their investment back.
cheriot 3 hours ago [-]
I suspect this is the real reason behind Anthropic limiting subscriptions to their own products and keeping API prices several times higher than comparable models. Applications are stickier than API users, and less technical users are stickier than programmers (i.e. Cowork is stickier than Code).
netcan 3 hours ago [-]
Anthropic generally seem more into living within market discipline and market signals of some sort. Products with margins, even if it's sort of irrelevant considering R&D costs and capital inflow.
That said, there's nothing like the real thing.
The risk is something like the railroad bubble and the dot-com one: over-investment, circular revenue, and a timeline that doesn't work.
Or, maybe it'll work out.
mirekrusin 55 minutes ago [-]
What people seem to miss is that they don't need to get the investment back from people; they will get it from machines.
AnimalMuppet 28 minutes ago [-]
Could you explain how you think that's going to work? Because to me it seems that until machines have bank accounts, there's no money for them to get.
dist-epoch 5 hours ago [-]
The moat is in the compute and the energy access.
And further down the line in chips, which is why Elon is building a fab now.
There are plenty of capable models on HuggingFace, yet I have no way of running them.
khalic 4 hours ago [-]
Give it a few years, or months. Tiny models are getting outrageously good.
spprashant 3 hours ago [-]
I wonder if this is why the tech cartel is buying up all the hardware?
If the average user gets convinced they could run LLMs for cheap at home, you cannot trap users in your walled garden anymore.
mobattah 4 hours ago [-]
Exactly. We’ll see the cost of AI continue to drop.
I was saying this for years about Tesla’s FSD - they finally had to give in and drop the price to stay competitive.
dbt00 46 minutes ago [-]
FSD still sucks ass compared to Waymo.
cedws 3 hours ago [-]
That fab will never be delivered. In five years you might see the manufacturing equivalent of a person dancing in spandex.
nutjob2 3 hours ago [-]
> which is why Elon is building a fab now
At least he says he's doing that. It doesn't really make sense, since you're not going to achieve an advanced node from a standing start in a practical time frame or at a practical cost.
Sounds like more Musk-flavored vapor.
re-thc 3 hours ago [-]
> It doesn't really make sense, since you're not going to achieve an advanced node from a standing start in a practical time frame or at a practical cost.
They already announced a partnership with Intel.
nutjob2 3 hours ago [-]
Oh the irony.
holoduke 1 hours ago [-]
Nah. Everybody is talking about AI. Everybody is using it. It's by far the most popular new tool human beings are using currently. As popular as mobile phones or spoons. And maybe as disruptive as the steam engine. AI companies are becoming the largest software companies on the planet. Everything points in that direction. Trillions of dollars are waiting in the market to be collected.
Eufrat 49 minutes ago [-]
Based on what? A lot of this is vibes and FOMO; just like any economic bubble.
There is no objective evidence of anything you’ve said. It isn’t even clear if AI has contributed positively to global economic growth. It reminds me a lot of the late 90s and the dot-com mania. Slapping a domain on a commercial would make your stock go up even if there was no substance to any of it.
The real shame is this mania drowns out serious, practical use cases because when the bubble collapses, the market will throw the baby out with the bathwater.
creddit 5 hours ago [-]
Ran some of my internal benchmarks against this and I'm very unimpressed. I don't think this moves them into the OAI v Anthropic v Gemini conversation at all.
Major analytical errors in its responses to several of my technical questions.
creddit 5 hours ago [-]
Playing with this some more, and it's actively not good. Just basic mathematical errors riddling responses. Did some basic adversarial testing where its responses are analyzed by Gemini, and Gemini is finding basic math errors on every relatively simple ask I make (relative to what Opus, Gemini, or GPT can handle). Yikes.
smlacy 1 hours ago [-]
Post actual results, make a blog post. Don't just say "this sucks" without tangible evidence.
Otherwise you're doomed to "sample size of one" level of relevance.
thorum 15 minutes ago [-]
I have the opposite experience: random HN/Reddit comments saying “this sucks” or “whoa this is a huge improvement” are the only benchmark that means anything. Standard benchmarks are all gamed and don’t capture the complexity of the real world.
glerk 4 hours ago [-]
Personal as in Meta gets your personal data so they can sell you more ads.
CrzyLngPwd 2 hours ago [-]
If I'm a claw, then they can send me as many ads as they like.
2pointsomone 3 hours ago [-]
[flagged]
KoolKat23 23 minutes ago [-]
Perhaps I'm wrong, but it definitely seems to be SOTA. Although looking at its ARC-AGI-2 score, its reasoning isn't very good. I suspect it's got the benefits of scale but lacks that human-added element, which is understandable considering they claim to be building it from the ground up. This should come in time if they have a good team. In real life, I'd imagine one would worry about overfitting when using it.
(I'm not using it as I'm not agreeing to their ad terms).
hackrmn 4 hours ago [-]
The hero image on the linked page, which consists of a muted teal background with the words "Introducing Muse Spark", weighs in at 3.5 MB. I don't even...
KerrickStaley 3 hours ago [-]
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
Such complaints are valid for AI model releases; it tells us that they are not using their own models to test their own release pages.
gobdovan 2 hours ago [-]
It's at least Meta-relevant. Compression Represents Intelligence Linearly (Y Huang, 2024)
yawnxyz 3 hours ago [-]
I think this speaks to the product release itself
fleabitdev 2 hours ago [-]
Good catch - looks like it's a PNG image, with an alpha channel for the rounded corners, and a subtle gradient in the background. The gradient is rendered with dithering, to prevent colour banding. The dither pattern is random, which introduces lots of noise. Since noise can't be losslessly compressed, the PNG is an enormous 6.2 bits per pixel.
While working on a web-based graphics editor, I've noticed that users upload a lot of PNG assets with this problem. I've never tracked down the cause... is there a popular raster image editor which recently switched to dithered rendering of gradients?
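For anyone who wants to see the effect for themselves, here's a minimal sketch (assuming Pillow and NumPy are installed; the dimensions and colours are invented stand-ins, not taken from Meta's actual asset) that renders the same gradient with and without random dither and compares the PNG cost per pixel:

```python
# Minimal sketch: random dithering defeats PNG's lossless filters, while a
# smooth gradient compresses to almost nothing. Dimensions/colours are invented.
import io
import numpy as np
from PIL import Image

W, H = 1200, 630  # hypothetical hero-image size

# Horizontal gradient between two muted teal-ish tones (stand-in colours).
t = np.linspace(0.0, 1.0, W, dtype=np.float32)[None, :, None]
grad = (1 - t) * np.array([46, 94, 99], np.float32) + t * np.array([70, 130, 135], np.float32)
grad = np.broadcast_to(grad, (H, W, 3))

def png_bits_per_pixel(arr):
    """Encode an HxWx3 array as a PNG in memory and return bits per pixel."""
    buf = io.BytesIO()
    Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8), "RGB").save(buf, format="PNG")
    return buf.getbuffer().nbytes * 8 / (W * H)

clean = grad                                                 # smooth gradient
dithered = grad + np.random.uniform(-1, 1, size=grad.shape)  # +/-1 level of random dither

print(f"clean gradient:    {png_bits_per_pixel(clean):.2f} bits/pixel")
print(f"dithered gradient: {png_bits_per_pixel(dithered):.2f} bits/pixel")
```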
Overpower0416 4 hours ago [-]
lol it literally took me 2s to google search "optimize image for website" and 10s to upload and get a smaller sized image.
The result for that specific image is 500 KB, an 85% decrease in size.
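For the curious, a minimal sketch of doing the same thing locally with Pillow instead of an online tool ("hero.png" is a placeholder filename, not the actual asset; note the rounded-corner transparency is lost because JPEG has no alpha channel):

```python
# Minimal sketch: re-encode a bloated PNG as JPEG/WebP with Pillow.
# "hero.png" is a placeholder; actual sizes will vary with the real asset.
from pathlib import Path
from PIL import Image

img = Image.open("hero.png")

# JPEG has no alpha channel, so flatten any transparency onto a solid background.
if img.mode in ("RGBA", "LA", "P"):
    rgba = img.convert("RGBA")
    flat = Image.new("RGB", rgba.size, (255, 255, 255))
    flat.paste(rgba, mask=rgba.split()[-1])  # use the alpha channel as the paste mask
    img = flat
else:
    img = img.convert("RGB")

img.save("hero.jpg", quality=85, optimize=True)  # lossy, typically a large win for gradients/photos
img.save("hero.webp", quality=85, method=6)      # WebP is usually smaller still

for name in ("hero.png", "hero.jpg", "hero.webp"):
    print(name, Path(name).stat().st_size // 1024, "KiB")
```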
BugsJustFindMe 3 hours ago [-]
An indistinguishable JPG is 170KB. An SVG would be 20KB.
levocardia 3 hours ago [-]
CSS with a linear gradient background would be even smaller :)
sofixa 4 hours ago [-]
You can even automatically do that on your CDN/delivery/web server layer. Or as part of your web deployment pipeline.
Overpower0416 3 hours ago [-]
Yes, but it might be a little too advanced for Meta ;)
re-thc 3 hours ago [-]
But they have personal superintelligence?
kzrdude 51 minutes ago [-]
For me it's 213 kB. Did they replace it?
hungryhobbit 4 hours ago [-]
Someday our robot overlords will be intelligent enough to ... optimize images!
(But today is not that day.)
ruszki 1 hours ago [-]
The proper optimization in this case is to not use images at all.
zfol_510 4 hours ago [-]
And it doesn't even look high-res.
Invictus0 4 hours ago [-]
complaining about sand on the beach
hackrmn 4 hours ago [-]
I am simply offended by Meta's lack of sensibility (or ability) in their use of images on the Web while touting their new flavour of artificial intelligence as a product.
Invictus0 3 hours ago [-]
old man shouts at cloud
hackrmn 3 hours ago [-]
more like old man shouts at someone else's computer
fooqux 4 hours ago [-]
It's not sand on the beach, it's garbage on the beach.
yalogin 3 hours ago [-]
Meta is in a weird spot. They caught up late to the game, and instead of releasing Llama as a chatbot they open-sourced it, precisely because they had lost the mindshare. They thought a chatbot was not their product, and I am sure they are regretting it now. Mark is obsessed with becoming the Android of something; he poured billions into the metaverse thinking he was first, and failed. He then open-sourced Llama and wanted to be the Android of LLMs. He ended up enabling Groq, but it didn't benefit Meta directly at all. They have no revenue or mindshare path from LLMs but continue to pour billions into it. The only 1-1 mapping is with the glasses, but that is a tough fit for the company given they are extremely allergic to privacy and security.
> He then open sourced llama and wanted to be the android of llms.
Well, the original Llama did kick off the era of open-source LLMs. Most early open-source LLMs were based on the Llama architecture. And look where we are now: OSS models are very close to the frontier.
It may not have benefited Meta, but it commoditized LLMs.
solarkraft 59 minutes ago [-]
Hell, most of us are still using llama.cpp for inference in some form
gardnr 2 hours ago [-]
The Llama weights were leaked. It open-sourced itself.
You are right, though. Meta could have been in lockstep, releasing ChatGPT features into some chatbot on Facebook.com, but instead it seemed like their FAIR arm was hell-bent on commoditising this stuff by publishing their research models before the Chinese companies took the lead there.
It's hard for me to be mad at FAIR even though I generally disagree with the outcomes that Meta produces for its users.
granzymes 1 hours ago [-]
Comes impressively close to GPT 5.4 / Gemini 3.1 Pro / Opus 4.6! Mostly behind OpenAI on coding/agentic benchmarks, behind Google on text reasoning, behind Anthropic on Humanity's Last Exam with tools (surprisingly the only benchmark where Anthropic leads currently).
Meta hasn’t fully caught up, but they came close and I think can solidly claim to be a frontier lab again. I’d call it a 3.5 horse race right now, and hopefully their next model improves. More model competition is good!
Poor Grok 4.2 should probably be dropped from the table.
fancy_pantser 22 minutes ago [-]
It's looking rather weak on reasoning and long-range problems with the approach described. For example, even with 16 agents and compaction, the HLE score is significantly below Anthropic's Mythos. Like you, I can see the release as a net Good Thing, but apples-to-apples against each org's latest models does have Meta holding steady in the middle of the pack.
zozbot234 15 minutes ago [-]
HLE encompasses very hard problems where the larger pretraining of Mythos probably matters quite a bit. I'm not saying that Mythos does not show some amount of genuine improvement compared to the latest Opus, just that if you're going to compare models you should at least make sure that the overall test-time workload is in the same ballpark given how high it is for Mythos.
moab 6 hours ago [-]
"Muse Spark is available now, and Contemplating mode will be rolling out gradually in meta.ai."
How does one get their hands on these models? They are not open-source, right? I go to meta.ai, but it's just a chat interface---no equivalent to Codex or Claude Code? Can you use this through OpenCode? Is Meta charging for model access, or is the gathering of chat data a sufficiently large tithe?
meetpateltech 5 hours ago [-]
"It will be available in private preview via API to select partners, and we hope to open-source future versions of the model."
I can't think of any "select partners" that would want to use this non-SOTA model. Just put it on OpenRouter.
giancarlostoro 5 hours ago [-]
If Microsoft is a select partner, maybe they could shove it into Copilot for VS or something, but yeah, I'm wondering the same, maybe Apple could be one of their partners too?
mark_l_watson 1 hours ago [-]
That would be my question also. I like it when companies have easy to sign up for, pay as you go models. Being able to buy $5 worth of tokens and get an API key - in less than a few minutes - is ideal.
monkeydust 6 hours ago [-]
TBD it seems. So far the only explained usage pattern is through a Meta product (Whatsapp, Facebook, Instagram).
moab 5 hours ago [-]
So to verify their claims and see how strong these models are, the answer is "believe us"?
Note: I'm expressing some skepticism here largely due to how recent rollouts from Meta flopped. Sincerely hoping that they do better this time around!
nemomarx 5 hours ago [-]
I assume the answer is to try it out in the chat mode? You could run your usual benches through that, right?
pstuart 5 hours ago [-]
I appreciate that they build this stuff for their own benefit, but I don't want to feed them even more of my private info. Hopefully the models will become public or lead to equivalent models from other sources.
bguberfain 4 hours ago [-]
We all know it... but I think they were very bold in this warning about using your private messages to train public models.
_Your messages with AIs will be used to improve AI at Meta. Don't share information, including sensitive topics, about others or yourself that you don't want the AI to retain and use_
discopicante 4 hours ago [-]
Meta doesn't exactly instill confidence in using personal data responsibly. Hard pass.
throwaw12 5 hours ago [-]
How is it that Meta spent so much money on talent and hardware, but the model barely matches Opus 4.6?
Especially looking at these numbers after Claude Mythos, it feels like either Anthropic has some secret sauce, or everyone else is dumber compared to the talent Anthropic has.
strulovich 5 hours ago [-]
Meta made a bunch of mistakes, and it looks like Zuckerberg spent a lot of money on talent and made big swings to change that (which happened about a year ago).
I think it’s unrealistic to expect them to come back from that pit to the top in one year, but I wouldn’t rule them out getting there with more time. That’s a possible future. They have the money and Zuckerberg’s drive at the helm. It can go a long way.
solenoid0937 5 hours ago [-]
It's benchmaxxed.
If they actually matched Opus 4.6 on such a short timeline, it would have been mighty impressive. (Keep in mind this is a new lab and they are prohibited from doing distills.)
throwaw12 5 hours ago [-]
how do you know it's benchmaxxed?
solenoid0937 5 hours ago [-]
Friends at Meta with access to the model + personal experience at Meta.
Meta's performance process is essentially "show good numbers or you're out." So guess what people do when they don't have good numbers? They fudge them. Happens all across the company.
luma 4 hours ago [-]
For one, they aren't using the latest version of many of the benchmarks, e.g. ARC-AGI-2 and not 3, etc.
prodigycorp 5 hours ago [-]
Meta's benchmaxxing tendencies are well known. Llama 4 was mega-benchmaxxed; there's nothing that suggests to me that Meta's culture has changed.
spindump8930 3 hours ago [-]
Re: changes, there's been enormous turnover in AI organizations, and in theory this one was developed by a "new" org. Whether that means less or more benchmaxxing is anyone's guess.
coffeebeqn 5 hours ago [-]
Matching Opus 4.6 would be pretty good? It's the SOTA model that's actually available.
reissbaker 4 hours ago [-]
Muse Spark doesn't even match GLM-5.1 on most benchmarks. And GLM is open source!
impulser_ 5 hours ago [-]
It's not even on par with Sonnet. It's on par with open-source models, and it's not even open source; it sits behind a private preview API.
Might as well not release anything.
CuriouslyC 2 hours ago [-]
Anthropic has mostly just been focused on coding/terminal work longer, and their pro-tier model is coding-focused, unlike the GPT and Gemini pro-tier models, which have been optimized for science.
Their whole "training the LLM to be a person" technique probably contributes to its pleasant conversational behavior, makes its refusals less annoying (GPT 5.2+ got obnoxiously aligned), and also contributes a bit to its greater autonomy.
Overall they don't have any real moat, but they are more focused than their competition (and their marketing team is slaying).
zozbot234 1 hours ago [-]
Autonomy for agentic workflows has nothing to do with "replying more like a person", you have to refine the model for it quite specifically. All the large players are trying to do that, it's not really specific to Anthropic. It may be true however that their higher focus on a "Constitutional AI"/RLAIF approach makes it a bit easier to align the model to desirable outcomes when acting agentically.
wotsdat 5 hours ago [-]
[dead]
username223 5 hours ago [-]
Facebook is working with the talent that can’t find a job at some other company. It doesn’t surprise me they ship mediocrity.
zozbot234 5 hours ago [-]
> has some secret sauce
Yup, it's called test-time compute. Mythos is described as plenty slower than Opus, enough to seriously annoy users trying to use it for quick-feedback-loop agentic work. It is most properly compared with GPT Pro, Gemini DeepThink or this latest model's "Contemplating" mode. Otherwise you're just not comparing like for like.
throwaw12 5 hours ago [-]
> it's called test-time compute.
Why can't others easily replicate it?
coder68 5 hours ago [-]
I have not delved into the theory yet, but it seems that the smaller open-source models do this already to an extent. They have fewer parameters, but spend much more time/tokens reasoning, as a way to close the performance gap. If you look at "tokens per problem" on https://swe-rebench.com/ it seems to be the case at least.
ddp26 5 hours ago [-]
The second paragraph starts "Muse Spark is the first step on our scaling ladder and the first product of a ground-up overhaul of our AI efforts. To support further scaling, we are making strategic investments..."
This article is about Meta, not about the user. Who signs off on these? Is the intended audience other people at Meta, not the user?
tjkrusinski 5 hours ago [-]
The article is published primarily to signal to the market that Meta is serious in its efforts to compete in building frontier AI models.
They want to 1) attract talent, 2) tell Wall Street they can play in this space as well, and 3) help employees feel the company is moving in the right direction.
A frontier LLM doesn't apply to their core consumer products.
Lihh27 5 hours ago [-]
the blog is the product. investor deck posted as a tech launch
conradkay 5 hours ago [-]
Stock up 9% today, very pleasant for Zuck if you do the math on his net worth :)
hungryhobbit 4 hours ago [-]
I mean, kinda? It's not like Zuck is selling his stock tomorrow, so daily fluctuations in stock price don't really affect him.
hvass 3 hours ago [-]
Genuine question: Why release this the day after Mythos? It does not appear SOTA (just based on benchmarks). OpenAI will likely release Spud tomorrow.
paxys 30 minutes ago [-]
Mythos is a news article. This is an actual model you can use.
eranation 3 hours ago [-]
That's a really good question. My sarcastic mind thinks that Anthropic rushed the Mythos announcement out of fear of Meta stealing their thunder... (I guess someone leaked it; a LOT of Anthropic folks are ex-Meta... so, you know).
Just a speculation, I have no real knowledge about it.
MattRix 39 minutes ago [-]
I think Anthropic did the mythos announcement to undercut OpenAI’s upcoming next model announcement, not Meta’s.
MattRix 38 minutes ago [-]
Why not? Not everything has to be SOTA to be interesting.
pixel_popping 59 minutes ago [-]
Meta being back in the commercial race is actually exciting, despite my not being a fan of the company.
gallerdude 5 hours ago [-]
This would have been an amazing release 6 months ago. But the industry moves so fast that this is a trite release. Maybe it's best for Meta to sell their superintelligence division. I don't think Zuck's vision is particularly compelling.
gordonhart 5 hours ago [-]
A new model comparable (ish) to the Claude/Gemini/GPT flagships is a big deal for the industry and for Meta even if it doesn't set the new frontier.
gallerdude 5 hours ago [-]
I’m not sure. If it was open source, certainly. But 4th place doesn’t really matter if you have nothing different to add.
lairv 5 hours ago [-]
If the model is truly on par with Opus 4.6/Gemini 3.1/GPT 5.4 (beyond benchmarks) this still puts MSL in the frontier lab category, which is no small feat given that they pretty much rebooted last year
Many labs aren't able to keep up with the frontier (xAI, Mistral).
datadrivenangel 5 hours ago [-]
Fourth place means you're not reliant on any of the external providers for internal AI use, which is important for organizational health and negotiating with those other providers.
rubyn00bie 4 hours ago [-]
I'm not sure it's useful for negotiating; the capex to build it was surely orders of magnitude more than it would cost to just use one of the other frontier models.
It’s like someone negotiating by saying, “I’ll waste even MORE money to build something worse if you don’t give me a deal.”
I’m not discounting there may be other advantages to doing it. I just don’t think negotiating is one.
blahblaher 5 hours ago [-]
Why would you use this instead of the other more proven models? Unless it's significantly cheaper. The general population mostly wants it free, and the more professional users are willing to pay for good/better responses.
NitpickLawyer 4 hours ago [-]
You wouldn't use this as an API. You would "use" this inside the Meta properties. Have a shop on FB Marketplace? Now you have copy, images, support, chat, translations, ERP, ESP, FPS, and all the other acronyms :) and so on for your mom-and-pop shop at $200/mo. Probably worse than, say, Claude/Gemini, but it's right there, one button away. "Click here to upgrade to AI++" or something.
gallerdude 4 hours ago [-]
But rolling your own can’t be that much cheaper than buying it from a leading lab. Especially when you consider the amount of spending on datacenters.
hnav 4 hours ago [-]
leading labs are going to be tightening the screws. Otherwise why not just run the entire company on a public cloud?
gordonhart 4 hours ago [-]
I won't use it, but I'm excited to see it for the same reason why I'm excited to see a near-frontier open-source release: more competition pushes prices down and reduces monopoly/cartel risk. I won't use Muse or Grok or GLM at this point but they're good for the ecosystem.
zozbot234 5 hours ago [-]
Their new Contemplating mode gives this model a Deep Research ability (akin to existing models from GPT and Gemini) that might make it quite comparable to the just-announced Mythos.
solenoid0937 5 hours ago [-]
Mythos is a much bigger pretrain; Contemplating is not the same thing.
zozbot234 5 hours ago [-]
> Mythos is a much bigger pretrain
Do we have data to substantiate that claim?
solenoid0937 5 hours ago [-]
It's pretty common knowledge. Spud is the only other PT comparable with Mythos.
Both Spud and Mythos can also scale via inference time compute.
Meta simply did not have enough compute online, long enough ago, to have a similar PT.
temp_praneshp 4 hours ago [-]
> might make it quite comparable to the just-announced Mythos
Do we have data to substantiate that claim?
dgellow 5 hours ago [-]
I never understood why Meta decided to join the race. They don't sell compute like Google or Microsoft. Why not let others do the hard work and integrate their LLMs into your systems if needed?
I assume it's because they have Instagram, Facebook, WhatsApp, and Threads data and feel they should be the ones using it for training, but it's really not obvious how having a frontier AI lab benefits their business.
observationist 5 hours ago [-]
Adtech Money. They've got GPUs, they've got the infrastructure, and they've got the advertisement platform, and the point is getting AI that can exploit the adtech and create a flywheel effect, maximizing return from the data they collect from Insta, WhatsApp, Facebook, etc.
It's not just about LLMs; it's about being able to model consumers and markets and psychology and so on. Meta is also big on the manipulation side of things: any sort of cynical technological exploitation of humans you can imagine that is technically legal, they're doing it for profit.
eldenring 4 hours ago [-]
Because there's a realistic chance this is the only important software technology moving forward, and it commoditizes Meta's entire business, which is software.
dgellow 3 hours ago [-]
Meta's business is human attention, human connections, and all derived data. They can use AIs for their systems, but the question is why they feel the need to spend billions on training and running their own frontier model.
bachmeier 4 hours ago [-]
> I never understood why meta decided to join the race.
I can think of at least two reasons: price and customizability. If they train their own models on their own data, they potentially have a better model at a better price, and they're not at the mercy of Anthropic's decisions when they decide to raise prices. Additionally, if you use someone else's model, you use it the way they create it and permit you to use it. In a couple of years, who has any idea how these models will be used? Arguably, a company the size of Meta should be in control of their AI models.
vinni2 4 hours ago [-]
From what I heard, Meta is spending hundreds of millions each month on Claude credits for developers. So that's a huge saving if they have their own models that match Opus.
spindump8930 3 hours ago [-]
The heavy spending on Claude (and the recent token benchmarks) came WELL after Meta's huge investments in compute infrastructure for AI, as well as its long history of language model development inside the company's science divisions.
xnx 5 hours ago [-]
Zuck is trying to convince himself he's good, and not just lucky.
chermi 4 hours ago [-]
You basically have to be involved if you're Meta. Even if there's only a 5% chance this AI stuff is as disruptive as the labs claim it is, you can't afford to miss out. Even if you're lagging the frontier, you must develop the competency internally. Otherwise you've ignored a 5% chance of total annihilation, probably even exposing yourself to shareholder lawsuits.
SoftTalker 4 hours ago [-]
LLMs/Chat-based systems will reach a point where Facebook, WhatsApp, Threads, Instagram, etc. are all unnecessary. The idea of opening a browser or a specific app to do a thing will seem antiquated. You can do it all with your chat-based agent. Meta wants to be part of that.
operatingthetan 4 hours ago [-]
I don't think everyone only wants to talk to machines going forward...?
SoftTalker 3 hours ago [-]
I don't want to do it now. But that seems to be where we are being headed, like lemmings running for the cliff.
dgellow 3 hours ago [-]
Sure, but they have the platforms; they don't need their own frontier models for that.
SoftTalker 3 hours ago [-]
The platforms will be irrelevant at some point. "Posting to Facebook" won't be a thing.
KaiserPro 4 hours ago [-]
A few things:
1) Meta was doing this at scale before OpenAI.
2) Decent ML is critical to categorising content at scale; the more accurate and fast the categorisation, the finer the recommendations can be (i.e. instead of "woman, outside" as tags for a video: woman, age, hair colour, location, subjects in view, main subject of the video, video style). Doing that as fast as possible with as little energy as possible is mission critical.
3) The Llama leak basically evaporated the moat around OpenAI, which _could_ have become a competitor.
4) For the AR stuff, all of these models (and visual models) are required to make the platform work. They also need complete ownership so that the models can be distilled to run on tiny hardware.
5) Dick swinging.
6) They genuinely want to become an industrial behemoth, so robots, hardware, etc. are now all in scope.
bee_rider 5 hours ago [-]
I think they just want to be a winner in the “next thing.” They hit social networking, but missed mobile operating systems and didn’t compellingly win at social media. Eventually an ambitious person with a bazillion dollars wants a clear win, right?
storus 4 hours ago [-]
Only thanks to Meta do we have competitive local LLMs. Without Llama, nothing decent would have been released. "Commoditize your complements" in action.
yoz-y 5 hours ago [-]
AI NPCs to fill in the empty Metaverse?
gallerdude 5 hours ago [-]
I’m sure there’s more to it than this, but it feels like Zuck has pet interests like VR and now AI.
alex1138 5 hours ago [-]
But no account support, that's boring
Or any quality control (people missing posts)
Or banning the people who should be banned while leaving everyone else alone
First and most important is the fact that they have a lot of very valuable data they wouldn't want to siphon off to a competitor. This data is a key strategic asset in the space where they do business.
Secondly though, I think it has to do with the fact Meta is big enough to worry about vertical integration and full control of their business.
The whole reason they've been trying to make AR/VR happen for over a decade now is the assumption of a worst-case and best-case scenario. The worst case is that Apple and Google want them gone. This isn't as far-fetched as it seems; Google has historically been Meta's biggest competitor and even tried to release its own social network back when Meta was threatening them. If either pulls Meta's apps from their respective stores, it'd be an immense blow to Meta; their whole trillion-dollar business depends on competitors' platforms.
Meta tried making inroads into the phone business but failed; it is a very crowded market after all. So they changed their strategy. Instead of playing catch-up, they'd invent "the next iPhone" and be the first to a brand new market. This is the best case scenario; they invent a new platform where they can be dominant from day 1 and stop depending on competitor's hardware, not only removing that risk factor for them, but also unlocking a new market they can control.
AI ties into all this because it appears to be key for this next platform to happen. You will communicate with these smart glasses via voice, hand gestures, or subtle movements that a model will have to interpret. The features that could make them stand out as more than just a screen on your face are all AI-related: object detection, world understanding, context awareness, etc. If all this were done via a 3rd party, Meta would effectively be back at square one: a competitor could easily yank away its model access, or sell it to a competitor. Meta would again be at the mercy of others.
Compared to other big-tech players, I think it's easy to see how Meta is in a riskier position. There's little Google or Microsoft can do to kill the iPhone. There's little Apple or Google can do to kill Amazon's online store. There's little Amazon or Apple can do to kill Microsoft's business deals. Google and Meta are primarily in the business of capturing people's data, attention, and selling ads, and both Google and Apple could do quite some damage to Meta. Beyond expanding it, it's important for them to invest in ways to protect their money-printing machine.
chairmansteve 5 hours ago [-]
Pumps up the stock price.
addandsubtract 4 hours ago [-]
To download all those torrents, obviously.
swyx 5 hours ago [-]
You don't understand why Zuck, who paid $1B for Instagram when it had no revenue and 7 employees because he is paranoid about platform shifts, decided to join the race for (what very possibly is) the biggest platform shift in human history?
oceansky 5 hours ago [-]
He also tried and failed to buy Snapchat, and then copied its features across all their big products: Instagram, Facebook, and even WhatsApp.
prodigycorp 4 hours ago [-]
The way you put it, I understand it less. lol
awestroke 5 hours ago [-]
Because Zuck has chronic FOMO, he's said as much himself
zeroonetwothree 5 hours ago [-]
But then how will Zuck win the billionaire dick measuring contest?
throwaw12 5 hours ago [-]
> I don’t think Zuck’s vision is particularly compelling.
But he has to do it anyways, otherwise Meta can be disrupted easily.
Google and Apple have hardware and distribution channels for their products
Amazon has the marketplace and cloud
Microsoft has enterprise and cloud
Meta is always looking for ways to stay afloat
xnx 5 hours ago [-]
Meta has 3.5 billion daily active users
throwaw12 5 hours ago [-]
and has competitors like TikTok, Snapchat, YouTube, Netflix, X, HBO, and Amazon Prime, all fighting for attention time.
They are worried something like Sora can disrupt them quickly
GalaxyNova 3 hours ago [-]
It is unfortunate that they decided to stop doing open-weight releases.
What could have been interesting has been reduced to simply another subpar LLM release.
gritspants 4 hours ago [-]
I would like someone to tell me how stupid I am. If I were Meta/Zuck I'd open source a great model the moment my company developed it. This just looks like a pitch to investors, otherwise.
jamiequint 4 hours ago [-]
"This just looks like a pitch to investors"
The goal of public companies is generally to generate profit for their investors.
samrus 4 hours ago [-]
I'm beginning to think that's the mantra we'll keep reciting as this whole country slowly falls apart.
kzrdude 37 minutes ago [-]
A pitch to investors sounds like working toward the opposite goal, though: convincing investors to give more money to the company.
gritspants 3 hours ago [-]
Thank you for telling me how stupid I am.
SoftTalker 4 hours ago [-]
This is also the goal of private companies.
edwcross 4 hours ago [-]
What is the "BioTIER-refuse" thing mentioned in the "Bioweapons Refusal" graph?
I Googled it and found absolutely nothing.
Well, to be honest, 100% of the results were websites containing the French word "boîtier" (casing) with a typo.
Even on Google Scholar, the closest match is "BioTiER (Biological Training in Education and Research) Scholars Program", which is at least 10 years old and has nothing to do with that.
Is that an AI-generated image with an AI-generated name that has no physical existence?
Question: since they've rebooted their approach to AI... have they given up on open models? There's no mention of open source or open weights or access to the models beyond their hosted services.
thegeomaster 5 hours ago [-]
Alexandr Wang on Twitter [0] mentioned open source plans:
"this is step one. bigger models are already in development with infrastructure scaling to match. private api preview open to select partners today, with plans to open-source future versions. incredibly proud of the MSL team. excited for what’s to come!"
Finding it a little bit tricky to evaluate because the harness is unfortunately very, very bad (e.g. search is awful). Can't wait to try this in some real external services where we can see how it performs for real.
Definitely getting ordinary high-quality results, overall. But hard to test agentic behavior and hard to test prose quality, even, when just working off of the default chat interface.
One thing that stands out is that _for_ the quality it feels very, very fast. Perhaps it's just only very lightly loaded right now, but irrespective it's lovely to feel.
I'm quite impressed with the tone overall. It definitely feels much more like Opus than it does, like, GPT or Grok in the sense that the style is conversational, natural and enjoyable.
So this is why Anthropic rushed the weirdest "pre-responsible-disclosure-totally-not-for-marketing" announcement yesterday? To make sure Spark doesn't steal their thunder? (Spark beats Opus 4.6 on some benchmarks...) Or have I become a bitter, cynical old man?
levocardia 2 hours ago [-]
Anthropic had their mythos post (and model) basically ready a few weeks ago, as evidenced by the blog content leaks. Also I highly doubt they just threw together a 250-page PDF model card in a "rush."
dbgrman 1 hours ago [-]
Last I checked with friends at Meta, they are pretty deeply invested in using Claude for coding etc. Anthropic has nothing to be scared of at MSL.
If Spark beats Opus 4.6, why is Meta wasting money on Opus internally?
hnav 4 hours ago [-]
It's giving "OpenAI says its new model GPT-2 is too dangerous to release (2019)"
reducesuffering 3 hours ago [-]
[because it would start an arms race]. The very arms race we're in... They were right
Yes, it's far more plausible that Meta released this, which is less convincing on evals, as a result of the Mythos previews.
eranation 3 hours ago [-]
Sarcasm aside, tried it (with instant mode), it's an impressive model.
It nailed all the ChatGPT meme gotchas (walk to the carwash, Alice 50 brothers, upside down cup, R's in strawberry, which number is bigger, 9.11 or 9.9?)
I guess all that money poaching OpenAI / Anthropic talent went somewhere...
Now, would I use "Meta Muse Code" or "Muse CoWork" if I have to get a Facebook account for all of my developers? Maybe not.
Would I use it via an API key? I might, depends on the pricing!
turtlesdown11 3 hours ago [-]
So since they hard-coded all of the meme gotchas, they built a good model?
nh23423fefe 3 hours ago [-]
lazy snark < playing around with it
zurfer 5 hours ago [-]
> Muse Spark is available today at meta.ai and the Meta AI app. We’re opening a private API preview to select users.
m4r1k 5 hours ago [-]
So no open weights... why would one choose Muse Spark instead of Anthropic, OpenAI, or Google models, all of which feature good-to-amazing harnesses?
redlewel 1 hours ago [-]
I am already somewhat concerned with companies like Anthropic and especially OpenAI having personal data via chats.
Typing that sort of information into a Meta AI product feels completely irresponsible. You could make some very sophisticated ad/psyop attacks with data from daily AI chats.
I doubt it's better than Opus, and even if it were, it's not worth the privacy concerns.
nharada 4 hours ago [-]
Saying nothing about the actual performance of this model, it does strike me how... minimal(?) this announcement is. Their safety section is like 2 paragraphs about bioweapons. Go look at the reports for OpenAI's and Anthropic's model releases. It's like 50+ pages of tests, examples, reports, and benchmarks across a bunch of safety and welfare metrics.
If Meta wants to be seen as a cutting edge massive lab they need to come across as one instead of looking like a school project version of a frontier model.
WarmWash 4 hours ago [-]
Rumor on the ground is that they expected a much stronger model than this one.
levocardia 2 hours ago [-]
Funny contrast with Anthropic. Ant does a "hero run," gets a model much more powerful than they expect. Meta does a hero run, gets a model much more mediocre than they expect. Read into this what you will, I guess?
htrp 3 hours ago [-]
Llama 4 Behemoth problems?
nubg 3 hours ago [-]
Can you elaborate?
WarmWash 3 hours ago [-]
That's it. It's just a rumor. A model, which I don't even know is this one specifically, fell short of expectations. This rumor came up around mid-March.
Personal Superintelligence made me think this was an open-source model being released and I was excited. Then I continued reading and I'll just wait until the model comes out.
dbgrman 1 hours ago [-]
I wonder if Zuck will ever internalize that the words 'personal' and 'Meta' will not be taken seriously together for another decade (if they don't make another gaffe).
gardnr 2 hours ago [-]
I was really excited until I realised that "personal" meant "owned by Meta".
I'm trying to decide if I find the doublespeak a bit offensive or not.
syntaxing 3 hours ago [-]
Kinda crazy; it really felt like Meta had the lead in LLMs, especially during the early LLaMA days. What happened for them to fall so far behind? I don't get how Llama 4 was such a big train wreck and they couldn't correct course like Google did.
plombe 3 hours ago [-]
Looks like a lightweight article.
But memory usage went from 316 MB to 502 MB when I hit refresh.
Not sure why? Anyone have any ideas? Why does it need half a gig of RAM in the first place?
khalic 5 hours ago [-]
Oh good, if they built a lab, I'm sure they took the time to precisely define what they mean by superintelligence? Right? …
52-6F-62 5 hours ago [-]
If this is superintelligence, then it follows we must all be super-duper intelligent.
gardnr 2 hours ago [-]
It’s personal…
oliver236 5 hours ago [-]
So glad it's beating all the others on bioweapons refusal. This is what I most wanted out of the latest SOTA model.
wmf 5 hours ago [-]
Zuck has a lot more experience being summoned before Congress than you.
dbgrman 1 hours ago [-]
Litmus test: what % of Meta engineers are using Muse vs Claude Code? Last I heard it was mostly Claude Code. Tells you everything you need to know about how serious these benchmarks are.
upmind 1 hours ago [-]
Sure, it's not as good as Claude right now, but for their first model in years it's certainly not bad. I hope they continue to develop models; having another competitor in the space would be nice.
Artgor 5 hours ago [-]
I'm cautiously waiting for the feedback from the first users.
Meta has produced a lot of great models (Llama), so maybe this is a comeback... but I'm cautious, as the jump in quality is almost too high.
Also, I think people aren't used to the fact that using such models requires meta.ai or the Meta AI app.
solenoid0937 5 hours ago [-]
My Meta friends say it's benchmaxxed af
loeg 5 hours ago [-]
We used to call this "overfitting," but I suppose everything has to be maxxed now. Fitmaxxed?
conradkay 5 hours ago [-]
It doesn't seem benchmaxxed; the ARC-AGI-2 score is quite bad (42.5%, vs 76.1% for GPT 5.4) and coding is okay. But maybe this is the best Meta can do even while benchmaxxing.
The impressive part is multimodality, which is very plausible since there's less focus there from other labs (especially Anthropic).
dbgrman 1 hours ago [-]
Given Llama 4's mucked-up benchmark numbers, I'd take the Spark announcement with many grains of salt.
visioninmyblood 5 hours ago [-]
https://meta.ai/ is where you can try it; it seems like the API is not publicly accessible yet. I feel they are very late to the game and don't show value to customers over other models.
p_stuart82 5 hours ago [-]
Late isn't the problem. Private preview API and no reason to switch; that's just another hosted model.
LZ_Khan 1 hours ago [-]
One word: distillation
santiagobasulto 5 hours ago [-]
This looks like a very interesting and promising model, especially after Llama lost so much ground recently. I hope they release the weights.
napolux 4 hours ago [-]
I can't log in. It always sends me the same code, and it's not correct for them.
chrsw 5 hours ago [-]
So Meta is not releasing open source models anymore?
dhruvyads 3 hours ago [-]
Sad to see it's not going to be open source.
BugsJustFindMe 3 hours ago [-]
I'm struck by all these independent announcements saying "look at our new model that we only spent $N Billion in acquisitions and hardware time to build and operate that's just like those other ones but this one is ours." Because if any of these companies would simply pool resources and work together, and if the government actively participated in providing funds, they'd be able to accelerate AI so much faster. It all feels incredibly wasteful. But I guess that's communism or something.
victorbjorklund 3 hours ago [-]
Competition often fosters innovation. Why are they innovating so fast and spending so much money? Because they don't want to fall behind. If there were no competition at all, then there would be much less reason to innovate and spend resources.
BugsJustFindMe 2 hours ago [-]
> Competition often fosters innovation.
So does cooperation in any framework that values public good over pure obedience to an inherently-abusive late stage capitalism. I know that's passé in a world where the US government no longer believes in funding science, and yet.
Competition is also inherently wasteful. And if you're talking about wasting a few K or a few Mil here or there, fine, whatever. But here we're talking about waste on the order of trillions of dollars at the end of the day.
vinni2 4 hours ago [-]
I have to create a Meta account to access it. No thanks.
nubg 3 hours ago [-]
NOTHING about this is personal! No weights were released!
Their product could literally teleport gold into my hands and I wouldn't use it.
Kuyawa 4 hours ago [-]
> Meta AI isn't available yet in your country
Not my loss, I'll keep using DeepSeek then. Wake me up when my country is no longer on the wrong/right side of history.
rvz 5 hours ago [-]
Until you actually try the model itself, assume any benchmark presented to you is part of the model's marketing material: it is not independently verified and is completely biased.
The same is true with any other model, unless otherwise stated.
In the next few days, we'll see who Meta has paid to promote this model on social media.
OsrsNeedsf2P 5 hours ago [-]
The only benchmark they show against SOTA models is in bioweapons refusal.
Edit: nvm I can't read, regular benchmarks against SOTA are there
sidcool 4 hours ago [-]
Meta.ai has Muse Spark.
ge96 4 hours ago [-]
Funny how websites do that thing where it looks like you can use the product, but as soon as you hit enter: nope, log in first.
federicodeponte 2 hours ago [-]
[dead]
alyxya 5 hours ago [-]
[dead]
aivillage_team 3 hours ago [-]
[dead]
1970-01-01 4 hours ago [-]
I can remember when AOL was an unstoppable giant. Except it wasn't. People eventually realized they could get a better, cheaper, faster experience with ISPs and search engines. The same path is unfolding before Meta. People have much better options, and a plethora of Meta users will slowly leave until the big moat is drained. Zuck, go retire to your NZ bunker before Meta is forced to merge with another media company.
ehutch79 5 hours ago [-]
How's the metaverse doing? It was the next big thing that we were all going to be working inside of... was that like 3 months ago?
Maybe they need to mine more Libra coin first? Or is it Diem now? Is that even still part of Meta?
I'm sure this new AI is super intelligent and super awesome and will be writing all the code, making all the blog posts, and generating all our youtube shorts in 6 months.
serf 5 hours ago [-]
what's with the negativity?
Yeah, the metaverse got abandoned. Also: Meta was the only one to try the concept for the past X-umpteen years, even though everyone in the industry goes ga-ga over virtual reality worlds and workplaces at every opportunity. It's literally Meta and Linden Labs (which has been on life support for 10+ years).
The alternative is : no one does it and nothing gets abandoned, which the industry has shown itself to be exceedingly good at w.r.t VR for the past 40+ years.
To be clear: I have no faith in Meta as a company; my problem lies in kicking an entity because they attempted something different. I don't think that's productive, and it produces stuff like the past AI winters, because groups get afraid of touching experimental concepts ever again lest they incur the wrath of the shareholder.
ehutch79 4 hours ago [-]
It's not the failure here or there, it's a pattern. It's not even the failing, it's the excessive hype cycle.
We keep seeing things being overhyped, with not much thought behind them. Meta is particularly bad about it. They changed their name for the hype of their VR product, when VR was still niche and had a long way to go, and still does. They couldn't even figure out legs for launch.
Now they have a 'superintelligence'? Yeah, that sounds like just the latest in a line of bullshit. Why would this be different?
sva_ 5 hours ago [-]
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
Got shitcanned due to bad PR & Zuck God-King terraforming the org, so there'd be a year delay to next release.
Real tragi-comedy, and you have no idea how happy it makes me to see someone in the wild saying this. It sounds so bizarre to people given the conventional wisdom, but, it's what happened.
They beat Gemini 2.5 Flash and Pro handily on my benchmark suite. (tl;dr: tool calling and agentic coding).
Llama 4 on Groq was ~GPT 4.1 on the benchmark at ~50% the cost.
They shouldn't have released it on a Saturday.
They should have spent a month with it in private prerelease, working with providers.[1]
The rushed launch and ensuing quality issues got rolled into the hypebeast narrative of "DeepSeek will take over the world"
I bet it was super fucking annoying to talk to due to LMArena maxxing.
[1] my understanding is longest heads up was single-digit days, if any. Most modellers have arrived at 2+ weeks now, there's a lot between spitting out logits and parsing and delivering a response.
It doesn't though
I think the general skepticism is because they are late to race, and they are releasing a Opus-4.6-equivalent model now, when Anthropic is teasing Mythos.
People like to hate on Meta regardless of anything, and regardless of whether it's justified or not. Not saying it isn't, just that it's many people's default bias.
Do they mean "the chain of thought is visible to the user" (ie. not hidden like ChatGPT), or "the medium of the chain of thought is not text, but visuals" (ie. thinking in images).
I'd guess the former, since it wouldn't be economical to generate transient images, just for thinking. But I'm not sure why they'd highight that in that case. If it were the second thing, that'd be extremely interesting. The first model not to think in text.
So many different companies are going to have similarly powerful ai that there will be no moat around it and it will be cheap. They will never earn their investment back.
That said, there's nothing like the real thing.
The risk is something like the railroad bubble and the dotcom. Over-investement, circular revenue and a timeline that doesn't work.
Or, maybe it'll work out.
And further down the line in chips, which is why Elon is building a fab now.
There are plenty of capable models on HuggingFace, yet I have no way of running them.
If the average user gets convinced they could run LLMs for cheap at home, you cannot trap users in your walled garden anymore.
I was saying this for years about Tesla’s FSD - they finally had to give in and drop the price to stay competitive.
At least he says he's doing that. It doesn't really make sense since you're not going to achieve an advanced node from a standing start in a practical time frame and cost.
Sounds like more Musk flavored vapor.
They already announced a partnership with Intel.
There is no objective evidence of anything you’ve said. It isn’t even clear if AI has contributed positively to global economic growth. It reminds me a lot of the late 90s and the dot-com mania. Slapping a domain on a commercial would make your stock go up even if there was no substance to any of it.
The real shame is this mania drowns out serious, practical use cases because when the bubble collapses, the market will throw the baby out with the bathwater.
Major analytical errors in its responses to several of my technical questions.
Otherwise you're doomed to "sample size of one" level of relevance.
(I'm not using it as I'm not agreeing to their ad terms).
- Hacker News Guidelines https://news.ycombinator.com/newsguidelines.html
While working on a web-based graphics editor, I've noticed that users upload a lot of PNG assets with this problem. I've never tracked down the cause... is there a popular raster image editor which recently switched to dithered rendering of gradients?
The result for that specific image is 500 KB, an 85% decrease in size.
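Purely as a hedged illustration of both comments above (a toy gradient and invented filenames, not the actual editor or image): quantizing a smooth gradient down to a palette is one common reason uploaded PNGs arrive dithered, and the same quantization choice also changes how well the file compresses, since dither noise tends to compress worse than smooth bands. A minimal Pillow sketch:

    # Toy sketch, not any specific tool's pipeline. Requires a recent Pillow
    # (the Dither/Palette enums). Filenames are invented for illustration.
    import os
    from PIL import Image

    # Build a simple horizontal grey gradient as a stand-in for a real asset.
    w, h = 1024, 256
    img = Image.new("L", (w, h))
    img.putdata([int(255 * x / (w - 1)) for _ in range(h) for x in range(w)])
    img = img.convert("RGB")
    img.save("gradient_rgb.png", optimize=True)

    # Same palette quantization, with and without Floyd-Steinberg dithering.
    banded = img.convert("P", palette=Image.Palette.ADAPTIVE, colors=64,
                         dither=Image.Dither.NONE)
    dithered = img.convert("P", palette=Image.Palette.ADAPTIVE, colors=64,
                           dither=Image.Dither.FLOYDSTEINBERG)
    banded.save("gradient_banded.png", optimize=True)
    dithered.save("gradient_dithered.png", optimize=True)

    # Compare file sizes against the plain RGB export.
    base = os.path.getsize("gradient_rgb.png")
    for name in ("gradient_banded.png", "gradient_dithered.png"):
        size = os.path.getsize(name)
        print(f"{name}: {size} bytes "
              f"({100 * (1 - size / base):.0f}% change vs. the RGB original)")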
(But today is not that day.)
Not sure what this is now.
For those reading fast, this isn't a reference to xAI's Grok, this is Groq.com - with its custom inference chip, and offerings like https://groq.com/blog/introducing-llama-3-groq-tool-use-mode... and https://console.groq.com/landing/llama-api
Well, the original Llama did kick off the era of open-source LLMs. Most early open-source LLMs were based on the Llama architecture. And look where we are now: OSS models are very close to the frontier.
It may not have benefited Meta, but it commoditised LLMs.
You are right though. Meta could have been in lockstep releasing ChatGPT features into some chat bot on Facebook.com but instead it seemed like their FAIR arm was hell bent on commoditising this stuff by publishing their research models before the Chinese companies took the lead in that.
It’s hard for me to be mad at FAIR even though I generally disagree with the outcomes that Meta produces for their users.
Meta hasn’t fully caught up, but they came close and I think can solidly claim to be a frontier lab again. I’d call it a 3.5 horse race right now, and hopefully their next model improves. More model competition is good!
Poor Grok 4.2 should probably be dropped from the table.
How does one get their hands on these models? They are not open-source, right? I go to meta.ai, but it's just a chat interface---no equivalent to Codex or Claude Code? Can you use this through OpenCode? Is Meta charging for model access, or is the gathering of chat data a sufficiently large tithe?
from Facebook Newsroom: https://about.fb.com/news/2026/04/introducing-muse-spark-met...
Note: I'm expressing some skepticism here largely due to how recent rollouts from Meta flopped. Sincerely hoping that they do better this time around!
Especially looking at these numbers after Claude Mythos, it feels like either Anthropic has some secret sauce, or everyone else is dumber compared to the talent Anthropic has.
I think it’s unrealistic to expect them to come back from that pit to the top in one year, but I wouldn’t rule them out getting there with more time. That’s a possible future. They have the money and Zuckerberg’s drive at the helm. It can go a long way.
If they actually matched Opus 4.6 on such a short timeline, it would have been mighty impressive. (Keep in mind this is a new lab and they are prohibited from doing distills.)
Meta's performance process is essentially "show good numbers or you're out." So guess what people do when they don't have good numbers? They fudge them. Happens all across the company.
Might as well not release anything.
Their whole "training the LLM to be a person" technique probably contributes to its pleasant conversational behavior, to making its refusals less annoying (GPT 5.2+ got obnoxiously aligned), and also a bit to its greater autonomy.
Overall they don't have any real moat, but they are more focused than their competition (and their marketing team is slaying).
Yup, it's called test-time compute. Mythos is described as plenty slower than Opus, enough to seriously annoy users trying to use it for quick-feedback-loop agentic work. It is most properly compared with GPT Pro, Gemini DeepThink or this latest model's "Contemplating" mode. Otherwise you're just not comparing like for like.
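For anyone who hasn't seen the term, here's a toy sketch of the simplest flavour of test-time compute, best-of-N sampling; `generate` and `score` are stubs invented for illustration, not any vendor's API. The point is just that answer quality is bought with extra inference per query, which is why latency comparisons only make sense within the same mode.

    import random

    # Stub standing in for a model call; real "thinking"/"contemplating" modes
    # are far more sophisticated, but the cost/quality trade-off has this shape.
    def generate(prompt: str, seed: int) -> tuple[str, float]:
        rng = random.Random(hash((prompt, seed)))
        return f"candidate #{seed}", rng.random()

    # Stub standing in for a verifier / reward model / self-consistency check.
    def score(candidate: tuple[str, float]) -> float:
        return candidate[1]

    def best_of_n(prompt: str, n: int) -> tuple[str, float]:
        # More samples = more test-time compute = usually a better answer,
        # but also roughly n times the latency and cost.
        return max((generate(prompt, s) for s in range(n)), key=score)

    for n in (1, 4, 16):
        answer, quality = best_of_n("fix the failing test", n)
        print(f"n={n:2d} -> {answer} (quality {quality:.2f})")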
Why can't others easily replicate it?
This article is about Meta, not about the user. Who signs off on these? Is the intended audience other people at Meta, not the user?
They want to 1) attract talent, 2) tell wall street they can play in this space as well, 3) help employees feel the company is moving in the right direction.
A frontier LLM doesn't apply to their core consumer products.
Just a speculation, I have no real knowledge about it.
Many labs aren't able to keep up with the frontier: xAI, Mistral.
It’s like someone negotiating by saying, “I’ll waste even MORE money to build something worse if you don’t give me a deal.”
I’m not discounting there may be other advantages to doing it. I just don’t think negotiating is one.
Do we have data to substantiate that claim?
Both Spud and Mythos can also scale via inference-time compute.
Meta simply did not have enough compute online, long enough ago, to have a similar pretraining (PT) run.
Do we have data to substantiate that claim?
It's not just about LLMs; it's about being able to model consumers and markets and psychology and so on. Meta is also big on the manipulation side of things: any sort of cynical, technically legal exploitation of humans you can imagine, they're doing it for profit.
I can think of at least two reasons. Price and customizability. If they train their own models on their own data, they potentially have a better model at a better price, and they're not at the mercy of Anthropic's decisions when they decide to raise prices. Additionally, if you use someone else's model, you use it the way they create it and permit you to use it. In a couple years, who has any idea how these models are used. Arguably, a company the size of Meta should be in control of their AI models.
1) Meta was doing this at scale before OpenAI
2) decent ML is critical to categorising content at scale; the more accurate and fast the categories, the finer the recommendations can be (i.e. instead of "woman, outside" as tags for a video: woman, age, hair colour, location, subjects in view, main subject of video, video style). Doing that as fast as possible, with as little energy as possible, is mission critical (see the sketch after this list)
3) The Llama leak basically evaporated the moat around OpenAI, which _could_ have become a competitor
4) for the AR stuff, all of these models (and visual models) are required to make the platform work. They also need complete ownership so that the models can be distilled to run on tiny hardware
5) dick swinging
6) they genuinely want to become an industrial behemoth, so robots, hardware, etc. are now all in scope.
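As promised above, a sketch of what item 2's coarse-vs-fine categorisation might look like in practice; the tag names and values are invented for illustration, not Meta's actual taxonomy.

    # Invented, illustrative tags only. The point: finer-grained labels give a
    # recommender far more signal per video than a couple of coarse categories.
    coarse_tags = {"subject": "woman", "setting": "outside"}

    fine_tags = {
        "main_subject": "woman",
        "apparent_age_range": "25-34",
        "hair_colour": "brown",
        "location_type": "beach",
        "subjects_in_view": ["dog", "surfboard"],
        "video_style": "vlog",
    }

    def ranking_features(tags: dict) -> list[str]:
        # Flatten tags into the kind of sparse features a ranking model consumes.
        features = []
        for key, value in tags.items():
            values = value if isinstance(value, list) else [value]
            features.extend(f"{key}={v}" for v in values)
        return features

    print(len(ranking_features(coarse_tags)), "features from coarse tags")
    print(len(ranking_features(fine_tags)), "features from fine tags")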
Or any quality control (people missing posts)
Or banning the people who should be banned while leaving everyone else alone
This is Zuck: https://news.ycombinator.com/item?id=4151433 or https://news.ycombinator.com/item?id=10791198
Secondly though, I think it has to do with the fact Meta is big enough to worry about vertical integration and full control of their business.
The whole reason they've been trying to make AR/VR happen for over a decade now is the assumption of a worst case and best case scenario. The worst case is Apple and Google want them gone. This isn't as far-fetched as it seems: Google has historically been Meta's biggest competitor and even tried to release its own social network back when Meta was threatening them. If either pulls Meta apps from their respective stores, it'd be an immense blow to Meta; their whole trillion-dollar business depends on competitors' platforms.
Meta tried making inroads into the phone business but failed; it is a very crowded market after all. So they changed their strategy. Instead of playing catch-up, they'd invent "the next iPhone" and be first to a brand-new market. This is the best case scenario; they invent a new platform where they can be dominant from day 1 and stop depending on competitors' hardware, not only removing that risk factor for them, but also unlocking a new market they can control.
AI ties into all this because it appears to be key for this next platform to happen. You will communicate with these smart glasses via voice, hand gestures, or subtle movements that a model will have to interpret. The features that could make them stand out as more than just a screen on your face are all AI related; object detection, world understanding, context awareness, etc. If all this were done via a 3rd party Meta would effectively be back on square one: a competitor could easily yank away its model access, or sell it to a competitor. Meta would be again at the mercy of others.
Compared to other big-tech players, I think it's easy to see how Meta is in a riskier position. There's little Google or Microsoft can do to kill the iPhone. There's little Apple or Google can do to kill Amazon's online store. There's little Amazon or Apple can do to kill Microsoft's business deals. Google and Meta are primarily in the business of capturing people's data, attention, and selling ads, and both Google and Apple could do quite some damage to Meta. Beyond expanding it, it's important for them to invest in ways to protect their money-printing machine.
But he has to do it anyways, otherwise Meta can be disrupted easily.
Google and Apple have hardware and distribution channels for their products
Amazon has the marketplace and cloud
Microsoft has enterprise and cloud
Meta is always looking for ways to stay afloat
They are worried something like Sora can disrupt them quickly
What could have been interesting has been reduced to simply another subpar LLM release.
The goal of public companies is generally to generate profit for their investors.
I Googled it and found absolutely nothing.
Well, to be honest, I got 100% of websites containing the French word "boîtier" (case/housing) with a typo.
Even on Google Scholar, the closest match is "BioTiER (Biological Training in Education and Research) Scholars Program", which is at least 10 years old and has nothing to do with that.
Is that an AI-generated image with an AI-generated name that has no physical existence?
"this is step one. bigger models are already in development with infrastructure scaling to match. private api preview open to select partners today, with plans to open-source future versions. incredibly proud of the MSL team. excited for what’s to come!"
https://x.com/alexandr_wang/status/2041909388852748717
Love to see it. Cheers!
Finding it a little bit tricky to evaluate because the harness is unfortunately very, very bad (e.g. search is awful). Can't wait to try this in some real external services where we can see how it performs for real.
Definitely getting ordinary high-quality results, overall. But hard to test agentic behavior and hard to test prose quality, even, when just working off of the default chat interface.
One thing that stands out is that _for_ the quality it feels very, very fast. Perhaps it's just only very lightly loaded right now, but irrespective it's lovely to feel.
I'm quite impressed with the tone overall. It definitely feels much more like Opus than it does, like, GPT or Grok in the sense that the style is conversational, natural and enjoyable.
I don't like that I need to login to my FB/Instagram account to access this.
If spark beats opus 4.6, why is meta wasting money on opus internally?
https://news.ycombinator.com/item?id=47538795
It nailed all the ChatGPT meme gotchas (walk to the carwash, Alice 50 brothers, upside down cup, R's in strawberry, which number is bigger, 9.11 or 9.9?)
I guess all that money poaching OpenAI / Anthropic talent went somewhere...
Now, would I use "Meta Muse Code" or "Muse CoWork" if I have to get a Facebook account for all of my developers? Maybe not.
Would I use it via an API key? I might, depends on the pricing!
I doubt its better than Opus and even if it was its not worth the privacy concerns.
If Meta wants to be seen as a cutting edge massive lab they need to come across as one instead of looking like a school project version of a frontier model.
I’m trying to decide if I find the doublespeak a bit offensive or not.
Also, I think people aren't used to the fact that using such models requires meta.ai or the Meta AI app.
The impressive part is multimodality, very plausible since there's less focus there by other labs (especially Anthropic)
So does cooperation in any framework that values public good over pure obedience to an inherently-abusive late stage capitalism. I know that's passé in a world where the US government no longer believes in funding science, and yet.
Competition is also inherently wasteful. And if you're talking about wasting a few K or a few Mil here or there, fine, whatever. But here we're talking about waste on the order of trillions of dollars at the end of the day.
Not my loss, will keep using DeepSeek then. Wake me up when my country is no longer on the wrong/right side of history.
The same is true with any other model, unless otherwise stated.
In the next few days, we'll see who Meta has paid to promote this model on social media.
Edit: nvm I can't read, regular benchmarks against SOTA are there
Maybe they need to mine more Libra coin first? Or is it Diem now? Is that even still part of Meta?
I'm sure this new AI is super intelligent and super awesome and will be writing all the code, making all the blog posts, and generating all our youtube shorts in 6 months.
yeah, the metaverse got abandoned. Also: Meta was the only one to try the concept for the past X-umpteen years, even though everyone in the industry goes ga-ga over virtual reality worlds and workplaces at every opportunity. It's literally Meta and Linden Labs (which has been on life support for 10+ years).
The alternative is: no one does it and nothing gets abandoned, which the industry has shown itself to be exceedingly good at w.r.t. VR for the past 40+ years.
https://news.ycombinator.com/newsguidelines.html
https://en.wikipedia.org/wiki/Diem_(digital_currency)