Just like for American models, the political bias is irrelevant. Realistically you are using the model for its reasoning capabilities, not for its answer to “what happened in Tiananmen”.
Yeah it’s ridiculous. GPT-4 serves billions of tokens every day, so if you take that into account the cost per token is very, very low.
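To put some very rough numbers on it (everything below is an illustrative assumption on my part, not OpenAI’s actual economics), here’s the back-of-envelope amortization:

```python
# Back-of-envelope amortization of a one-off training cost over served tokens.
# All numbers here are illustrative assumptions, not actual OpenAI figures.

training_cost_usd = 100e6   # assumed one-off training cost
tokens_per_day = 5e9        # assumed tokens served per day
amortization_days = 365     # assumed amortization window

total_tokens = tokens_per_day * amortization_days
cost_per_million_tokens = training_cost_usd / total_tokens * 1e6

print(f"Amortized training cost: ${cost_per_million_tokens:.3f} per million tokens")
# -> about $0.055 per million tokens with these assumptions, i.e. the one-off
#    training cost is a rounding error next to the per-request inference cost.
```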
> So for the first 20 years or 2/3 of the entire history of the company, they were unprofitable or barely profitable.
We must have wildly different definitions of “barely profitable”. Half a billion in 2004 money is a lot of profit, and a billion back to back in 2009 and 2010 is a lot of profit.
I think you’re confusing Amazon with the next generation of loss-leader companies. Let’s talk Uber, let’s talk Twitter, if we want to point at “hugely unprofitable” companies. But Amazon is a beast of its own; they have a very coherent financial story. Even during their money-losing decade they posted insane results, frequently multiplying revenue while barely increasing operating costs.
Oh thanks for clarifying in even more excruciating detail how a subtraction works, that is really helpful.
Why would you repeat the lie that they’re “usually unprofitable” when the information is publicly available in a million places on the internet? In 2023 Amazon made:
Amazon is factually not “usually unprofitable”: they have in fact made a profit (as in money that actually goes into your pocket after deducting all expenses) every year for the last 15 years, except for 2022 and some tiny losses in 2014 and 2012.
Thanks for clarifying that profit is calculated using a subtraction, but you’re missing the core of my comment. Amazon self-finance their R&D and STILL make a fuckload of profit. They made like $30B of free cash last year alone. In the last 15 years they’ve made >$100B in overall profit and have only been in the red twice.
They’re not just profitable, they’re an insane money-printing machine that shows no sign of slowing down.
There’s absolutely no doubt that lower-end models are going to keep improving and that inference will keep getting cheaper. It won’t be on a Raspberry Pi, but my money’s with you. In 6 years you’ll be able to buy some cheap-ish specialized hardware to run open models on, and they’re gonna be at least as capable as today’s frontier models while burning a fraction of the energy.
In fact I wouldn’t be surprised if frontier models were somehow overtaken by vastly cheaper models in the long run. The whole “trillion parameter count” paradigm feels very hacky and ripe for radical simplification. And wouldn’t it be hilarious? All those suckers spending billions building a moat only to see it swept out from under their feet.
That’s a long-established myth. Amazon started out in ’94 and became profitable in like 10 years. Most of their hardcore R&D is self-financed because they generate just that much free cash.
Last year it was APIs
Hahaha the inane shit you can read on this website
If you take into account the optimizations described in the paper, then the cost they announce is in line with the rest of the world’s research into sparse models.
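For anyone who wants to sanity-check that, here’s a back-of-envelope version using the usual “~6 FLOPs per active parameter per training token” rule of thumb. Every number below (active parameter count, token count, sustained throughput, GPU-hour price) is a ballpark assumption on my part, not an official figure:

```python
# Rough training-cost estimate for a sparse (MoE) model, using the common
# "~6 FLOPs per *active* parameter per training token" approximation.
# Every figure below is a ballpark assumption, not an official number.

active_params = 37e9              # assumed active parameters per token (MoE)
training_tokens = 14.8e12         # assumed training set size in tokens
sustained_flops_per_gpu = 3.3e14  # assumed sustained throughput (~330 TFLOP/s)
gpu_hour_cost = 2.0               # assumed rental price per GPU-hour (USD)

total_flops = 6 * active_params * training_tokens
gpu_hours = total_flops / sustained_flops_per_gpu / 3600
print(f"{gpu_hours/1e6:.1f}M GPU-hours -> ~${gpu_hours * gpu_hour_cost / 1e6:.1f}M")
# Only the *active* parameters enter the FLOPs count, which is why a sparse
# model in this range can plausibly land in the single-digit millions.
```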
Of course, the training cost is not the whole picture, which the DeepSeek paper readily acknowledges. Before arriving at one successful model you have to train and throw away n unsuccessful attempts. That’s also true of any other LLM provider; the training cost is used to compare technical trade-offs that affect training efficiency, not business models.
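Just to make the distinction concrete, a toy calculation with made-up numbers (the run count and cost fractions are pure assumptions):

```python
# Toy illustration: the headline training cost covers only the final successful
# run, while the full R&D program also pays for discarded experiments.
# The run count and cost fractions below are made-up assumptions.

final_run_cost = 5.5e6    # assumed cost of the one successful run
failed_attempts = 8       # assumed number of discarded runs/ablations
avg_cost_fraction = 0.3   # assumed: failed runs stop early, ~30% of a full run

program_cost = final_run_cost * (1 + failed_attempts * avg_cost_fraction)
print(f"Reported run: ${final_run_cost/1e6:.1f}M, "
      f"whole program: ~${program_cost/1e6:.1f}M")
# The first number is useful for comparing training efficiency between
# architectures; the second is closer to what the business actually spends.
```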