America's $1T AI Gamble

(apricitas.io)

56 points | by m-hodges a day ago ago

82 comments

  • vessenes 19 hours ago ago

    This is a good analyst report - lots of data. Conclusion - firms are spending ahead of sustained revenues right now, and a lot of the money is going offshore to TSMC, basically.

    I’m not certain of the conclusion - I think a lot depends on amortization schedules - if data centers are fully booked right now, then we don’t need very long amortization schedules at the reported 60+% margin on inference to see this capex fully paid off.
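
    (Rough sketch of that payback math - every number below is an assumption for illustration, not a figure from the report:)

      # Python sketch: data-center payback under assumed full utilization.
      capex = 1_000_000_000_000            # ~$1T of AI capex (assumed)
      annual_inference_revenue = 400e9     # assumed $400B/yr of inference sales
      gross_margin = 0.60                  # the ~60% inference margin cited above

      annual_gross_profit = annual_inference_revenue * gross_margin
      payback_years = capex / annual_gross_profit
      print(f"{payback_years:.1f} years")  # ~4.2 years under these assumptions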

    My prior is that we are seeing something like 1/10,000th or so of the inference demand the world could reasonably absorb. There’s a note in the analysis that might back this - it says we are seeing one of the only times ever where hardware prices are rising over time. Combined with spot prices at Lambda Labs (still quite high, I’d say), it doesn’t look like we’re seeing a drop in inference demand.

    Under those circumstances, the first phases of this bet, cross-industry, look like they will pay off. If that’s true, as an investment strategy I’d just buy the basket - OpenAI, Anthropic, GOOG, META, SpaceX, MSFT, probably even Oracle - and wait. We’ll either get the rotating state-of-the-art frontier capacity we’ve gotten in the last 18 months, or one of those will have lift-off.

    Of those, I think MSFT is the value play - they’re down something like 20% in the last six months? Satya’s strategy seems very sensible to me - slow hyperscale buildouts in the US (lots of competition) and do them everywhere else in the world (still not much competition). For countries that can’t build their own frontier models, the next best thing is going to be running them in local datacenters; MSFT has long-standing operational bases everywhere in the world, which is arguably one of their differentiators compared to GOOG/META.

    • scrollop 19 hours ago ago

      If a different architecture from LLMs is invented (one that could actually "think", that could potentially reach AGI), then perhaps it would be more efficient than LLMs. Perhaps LLMs can make themselves more efficient. They can't even remember "properly". Hallucinations cripple them for serious, professional uses: if they hallucinate even 5% of the time and you are asking mission-critical queries, that's a problem.

      Perhaps all of these data centers won't be needed. At least not by some of the current AI companies that won't keep up. If that happens to OpenAI, that would be quite a shock to the financial system (and GDP).

      Microsoft's changes to Windows have alienated some of their userbase. Copilot is poor compared to its rivals. There's a reason they are down 20%. Linux adoption is accelerating (still too low!).

      And don't forget AI on device. When it becomes "good enough" for most tasks, data centre use will reduce.

      With Nvidia backtracking and saying they won't invest $100 billion in OpenAI, and Oracle in a poor financial position with the loans for its upcoming data centres becoming more expensive and dubious (they could fail to repay them) - the picture isn't as positive as you make it out to be. Which makes me think that you have an ulterior motive.

      • vessenes 18 hours ago ago

        I mean I just said in my post the investment strategy that makes sense to me. But I'm here for knowledge exchange not pumping.

        Here's the thing - we could list technical challenges / problems all day. And still, I use way more inference than a year ago. I'd use even more, a lot more, if it were faster (in latency terms). I want to buy it, and the providers want to sell it to me. So your statement "hallucinations cripple them for .. professional uses" is just incorrect. The correct statement is "despite hallucinations, professional use is skyrocketing." OpenClaw has gained something like 150,000 GitHub stars in the last month. People are using inference at all levels of society.

        I propose to you that if in fact we get some sort of AGI that is 10,000x more compute-efficient than transformer architectures, then datacenter investment losses will no longer matter in a material way to almost anyone in the world. So you might be right, but you've already got that 'trade' or 'return' banked -- the cheap, ubiquitous AGI you propose might happen would provide broad benefits. In those terms you're sort of doubling up on your short by not getting some upside exposure to the long.

        Re: MSFT, yep, it's a contrarian position. That said, I'm interested in an informed short perspective on MSFT - do you think that the loss of Windows licensing revenues would offset the benefit of being the world's "safe" local AI datacenter provider? And are you sure that there is even a reduction in Windows licensing? Satya's comments in a recent interview made it sound like they see agentic usage multiplying Windows licenses -- basically, when you spin up a web agent, it will lease a Windows license to run the browser -- and parallel agents = multiple simultaneous leases -- so they are seeing more and more revenue shift to Azure in this world, away from direct licenses for desktops. To me, it feels like this could be an incredible new era of platform lock-in for them: the Azure stack is the only way to safely run GPT-5 in a nationally protected datacenter - and oh, by the way, once you're signed up, one contract gets you a full MS software license.

      • nradov 18 hours ago ago

        The physical data centers including power, cooling, and fiber connectivity will be needed. Demand for compute capacity in some form is effectively infinite. But the current generation of CPUs / GPUs / TPUs inside those data center racks might turn out to be worthless if another disruptive innovation comes along.

  • mg 19 hours ago ago

    My napkin-math approach to get a bird's eye perspective on the situation:

    A $1T investment needs to produce on the order of $100B in yearly earnings to be a good investment.

    Global GDP is about $100T.

    So one way for things to work out for the AI companies would be if AI raises GDP by 1% and the AI companies capture 10% of the created value.
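
    A quick sketch of that arithmetic (the 1% uplift and 10% capture are the assumptions above):

      # Napkin math: does a $1T investment pencil out at 1% GDP uplift and 10% capture?
      investment = 1e12                            # $1T
      required_yearly_return = 0.10 * investment   # ~$100B/yr for a "good" investment
      global_gdp = 100e12                          # ~$100T

      created_value = global_gdp * 0.01            # AI raises GDP by 1% (assumption)
      captured = created_value * 0.10              # AI companies capture 10% of it (assumption)
      print(captured >= required_yearly_return)    # True: $100B/yr, exactly the hurdle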

    • louiereederson 18 hours ago ago

      At some point AI may deliver the level of net economic benefit you reference, but it's not entirely clear that we're there yet.

      Right now much of the direct monetization occurs via OpenAI and Anthropic, who together have around $30B in annualized revenue. They are burning cash like crazy, though admittedly have potentially sustainable unit economics (gross margins around 40-60% before revenue share).

      However, they need to spend a huge chunk of revenue on training. OpenAI spent something like $9B on training against around $13-14B in revenue in 2025 (different from annualized revenue), according to The Information. Anthropic's mix is supposed to be similar. That also implies a lot (maybe the majority) of their compute spend is training.

      If scaling laws falter, what happens to training spend? What happens to the degree of competitive differentiation, given that Chinese open-source models are only a few months behind the frontier? Then what happens to margins? It is very fragile.

      • mg 18 hours ago ago

        The earnings do not need to come via direct monetization.

        Google search revenue for example was over $200B in 2025. This revenue will be tightly coupled to the quality of their AI models in the future.

        • goalieca 17 hours ago ago

          Google's search revenue comes from ads, which depend somewhat on the quality and speed of the search results. Yeah, a better LLM could do it, but so could a better PageRank with NLP that actually works again.

          • mg 17 hours ago ago

            Are you located in a country where Google does not yet show AI answers?

            In most countries, AI answers are the central aspect of Google now. Not the ranked pages.

            • goalieca 15 hours ago ago

              I use DDG instead of Google, but it does show answers. I don't go to Google for chatbots; I go to find answers, and more often than not I find myself unsatisfied with the LLM answer, so I end up diving past SEO spam (also LLM-written these days) to find where I need to go. It's very frustrating and I'm feeling very pessimistic about the future of the web. It seems to be atrophying.

    • sottol 17 hours ago ago

      If I'm not mistaken, the article states that the investment is $1T annualized when taking software development costs into account [1], assuming the labs don't all suddenly decide to stop development.

      That would mean earnings of ~$1.1T would be required on that investment annually, so maybe $2T of revenue, capturing 2% of global GDP - so I'd estimate that GDP would need to go up more like 5-10% to justify this (rough sketch below).

      [1] https://substackcdn.com/image/fetch/$s_!Gf2t!,f_auto,q_auto:...
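
      A sketch of that variant (the margin used to bridge ~$1.1T of earnings to ~$2T of revenue is my assumption):

        # If the ~$1T is an annual, recurring spend rather than a one-off investment:
        annual_spend = 1e12                      # ~$1T/yr incl. software development costs
        required_earnings = annual_spend * 1.10  # ~$1.1T/yr (spend plus ~10% return)
        assumed_margin = 0.55                    # assumed margin linking earnings to revenue
        required_revenue = required_earnings / assumed_margin

        global_gdp = 100e12
        print(f"${required_revenue/1e12:.1f}T revenue, "
              f"{required_revenue/global_gdp:.0%} of global GDP")   # ~$2.0T, ~2%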

    • nradov 18 hours ago ago

      That reminds me of the "Chinese marketing" strategy of a lot of Western companies 30 years ago when their economy first opened up. There are a billion people in China, so if we can capture just 1% market share there then we'll make a fortune, right? Spoiler alert: it (mostly) didn't work.

    • bryanlarsen 18 hours ago ago

      10% capture seems highly unlikely. That level of capture is only possible for high-touch B2B sales, aka "call-me" pricing.

      For call-me pricing to work, you have to ensure that any public sticker price is not a suitable alternative: either don't have a sticker price at all, make the sticker price so high that essentially nobody will buy it, or gate a feature like OAuth so that the public version is infeasible for businesses.

      And then you also have to maintain enough of a monopoly / oligopoly to sustain that level of pricing.

      I don't think either of those two conditions will apply in the future.

      AI providers now have a sticker price that provides basically all functionality, almost completely eliminating the opportunity for extremely high-margin B2B. They've decided a small slice of a large pie is bigger than a large piece of a smaller pie. I suspect that's true and will continue to be true in the future.

      An oligopoly is difficult to sustain with more than 3 global players. Right now we seem to have 3 frontier models for coding that can and will charge more than commodity prices. However, there are open-source non-frontier models that you can use for inference costs only, and even if those don't keep up, it seems likely there will be enough non-frontier models available that their pricing will also be at the commodity level. Those cheaper models will provide significant downward pressure on frontier pricing.

      • mg 17 hours ago ago

        I don't think we have seen "all functionality" yet.

        We have not seen iterative AI use for example.

        The use case, where you tell the model "Solve this task. Then solve it again. Keep the better solution, then solve it again. On and on. Tomorrow, show me the best solution.".

        And also not the "Run a company on your own" use case.

        Those might make people and companies use models full-time. The price of that will be way different from current subscription prices. The TCO of a single instance of a SOTA model is on the order of $100k per year.
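
        A rough version of that TCO figure (GPU count and hourly cost are assumptions, not vendor numbers):

          # Keeping one model instance busy around the clock:
          gpus_per_instance = 8            # one assumed inference node
          usd_per_gpu_hour = 2.0           # assumed blended rental/ownership cost
          hours_per_year = 24 * 365

          tco = gpus_per_instance * usd_per_gpu_hour * hours_per_year
          print(f"${tco:,.0f}/year")       # ~$140,000 - same order as the $100k figure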

        • bryanlarsen 17 hours ago ago

          I believe that you're arguing "1% GDP increase due to AI is too conservative" rather than against "capturing 10% of the value increase is possible".

      • bryanlarsen 18 hours ago ago

        I think the more realistic napkin math is a 10% GDP bump and 1% capture. You'll still find a lot of people who think we're going to get more than a 10% GDP bump from AI, but it'll definitely be fewer.

        Will AI increase the rate of GDP growth by 0.5% or so over 20 years?
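
        For what it's worth, an extra 0.5 points of annual growth compounds to roughly that 10% level bump over 20 years:

          # Compounding a 0.5 percentage-point boost to annual GDP growth for 20 years.
          extra_growth = 0.005
          years = 20
          print(f"{(1 + extra_growth) ** years - 1:.1%}")   # ~10.5% cumulative bump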

  • WarmWash 19 hours ago ago

    AI plans are not going to stay at $20/mo.

    People will go to alternative models, but those will likely be about as popular as Linux.

    • pyrophane 19 hours ago ago

      Yeah, this is something I am thinking a lot about. Companies won't be able to sustain this level of spending forever, and one of two things will need to happen:

      1. Models become commodities and immensely cheaper to operate for inference as a result of some future innovation. This would presumably be very bad for the handful of companies who have invested that $1T and want to recoup that, but great for those of us who love cheap inference.

      2. #1 doesn't happen and the model providers begin to feel empowered to pass the true cost of training + inference down to the model consumer. We start paying thousands of dollars per month for model usage, and the price gate blocks most people from reaping the benefits of bleeding-edge AI, instead locking them into cheaper models that are just there to extract cash by selling them things.

      Personally I'm leaning toward #1. Future models nearly as good as the absolute best will get far cheaper to train, and new techniques and specialized inference chips will make them much cheaper to use. It isn't hard for me to imagine another DeepSeek moment in the not-so-distant future. Perhaps Anthropic is thinking the same thing, given that they are rumored to be pushing toward an IPO as early as this year.

      • WarmWash 19 hours ago ago

        Back-of-the-envelope calculations point to $60-$80/mo plans for a 5-10 year payback period.

        This also fits with OpenAI's announced advertising costs, and is something most consumers can stomach.
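
        One way that back-of-the-envelope can work (subscriber count and payback window are assumed, not disclosed figures):

          # Spreading a ~$1T build-out across paying subscribers.
          capex = 1e12                 # assumed total build-out
          subscribers = 200e6          # assumed paying users worldwide
          payback_years = 7            # middle of the 5-10 year range
          months = payback_years * 12

          print(f"${capex / subscribers / months:.0f}/mo")   # ~$60/mo before inference and overhead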

    • sambull 19 hours ago ago

      That's why they need to widen the moat; it appears not giving us access to hardware might be that moat.

      They desperately need LLMs to stay a rentier business, and hardware advances are a direct attack on that model.

    • general1465 19 hours ago ago

      Economics will be the decisive force. Paying $1,000 a month for AI, or buying a server for $10k and loading a Chinese AI model onto it that can do 90% of what SOTA models can? Looks like a no-brainer.

      • nebula8804 19 hours ago ago

        Man if China can catch up on the hardware front we could be seeing the 'TikTok' story repeat there. (They provide a better product > US govt panics > bans the US from the good stuff)

    • scrollop 19 hours ago ago

      Yeah, they'll be free - on device and "good enough".

      If you want the best, then pay.

    • wslh 19 hours ago ago

      Possibly, but that assumes continuity. New math and algorithmic breakthroughs could make much of today’s AI stack legacy, reshuffling both costs and winners.

    • co_king_3 19 hours ago ago

      I don't know about you, but I benefit so much from using Claude at work that I would gladly pay $1,500-$2,000 per month to keep using it.

      • galleywest200 19 hours ago ago

        That is more than one month's rent for most of the world. Most people are simply not going to pay this.

        • wongarsu 19 hours ago ago

          My rent is less than that. But if you add up salary, payroll taxes, benefits, social security, etc., my employer still spends around four times that amount on employing me. More if you include the misc overheads associated with having one more employee. Personally I could never afford 1500-2000€/month for dev tooling, but my employer should rationally be willing to spend that for anything that makes me more than 25% more effective.

          I'm not sure today's Claude Code could ask for that. But I don't think it would be a crazy goal for them to work towards.
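
          The break-even math, with my numbers above as the assumptions:

            # Does $2k/mo of tooling pay for itself at a ~25% effectiveness gain?
            fully_loaded_cost = 8000     # assumed $/month the employer spends per dev (~4x the tool)
            tool_cost = 2000             # $/month for the AI tooling
            productivity_gain = 0.25     # fraction of extra output needed to justify it

            extra_output_value = fully_loaded_cost * productivity_gain
            print(extra_output_value >= tool_cost)   # True at exactly 25%; anything above is net positive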

          • sarchertech 19 hours ago ago

            There have been many many productivity improvements over the last 50 years that provided more than a 25% boost. I’ve yet to see an employer pay that much per employee for any of them.

            Also a 25% boost per individual doesn’t necessarily equal a 25% boost to the final output if there are other bottlenecks.

        • co_king_3 19 hours ago ago

          Well then I'm sorry but unfortunately they are going to be left behind.

          People who are cut out to be software developers can afford the means of production.

          • ekjhgkejhgk 19 hours ago ago

            The people who own "the means of production" aren't you.

          • throwaway77385 19 hours ago ago

            Things that can only be used by an exclusive elite don't tend to survive, unless we're talking super-yachts.

            AI is only going to work if enough people can actually meaningfully use it.

            Therefore, the monetisation model will have to adapt in ways that make it sustainable. OpenAI is experimenting with ads. Other companies will just subsidise the living daylights out of their solutions...and a few people will indeed run this stuff locally.

            Look at how slow the adoption of VR has been. And how badly Meta's gamble on the metaverse went. It's still too expensive for most people. Yes, a small elite can afford the necessary equipment, but that's not a petri dish on which one can grow a paradigm-shift.

            If only a few thousand people could afford [insert any invention here], that invention wouldn't be common-place nowadays.

            Now, the pyramid has sort of been turned on its head, in the sense that things nowadays don't start expensive and then become cheaper, but instead start cheap and then become...something else, be that more expensive or riddled with ads. But there are limits to this.

            > People who are cut out to be software developers

            You mean the people AI is going to replace? What's the definition of 'cut out to be' here?

            • bigbadfeline 10 hours ago ago

              > Now, the pyramid has sort of been turned on its head,

              It has, and the financial system enables that; the self-restraint that was promised when Glass-Steagall was gutted never materialized.

              > But there are limits to this.

              Yes and no.

              Yes, because there are limits, just as there's a limit to the load placed on a ship: if you overload it with cargo, something or somebody must be thrown overboard in order to preserve the ship.

              No, because those who're responsible for loading and overloading the ship are the ones commanding and steering it. When they overload the ship they get to throw you overboard and keep your stuff too... there's nothing to compel them to stay within limits and everything to tempt them to do the opposite.

              We've already been thrown overboard with regard to hardware purchases, and that will spread to other areas with the BS AI excuse.

              So many comments, other than yours, around here engage in deep thinking about the profits or losses of Captain Ahab... they miss the point entirely.

              • throwaway77385 3 hours ago ago

                Salient point, regarding the 'no' bit. I agree completely. But since I was responding to a likely troll, there wasn't much point elaborating further. Thanks for the added information :)

          • flir 19 hours ago ago

            A $2k/month model, should it ever arise, won't need you.

            • Octoth0rpe 19 hours ago ago

              I haven't looked at a cost analysis recently, but it's possible that we basically already have $2k/month models, if they were priced to be even slightly profitable.
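
              A rough version of that cost analysis (token volume and per-token price are assumptions for illustration):

                # What a heavy agentic user might cost at API-style rates.
                tokens_per_day = 20_000_000       # assumed busy multi-agent coding setup
                usd_per_million_tokens = 5.0      # assumed blended input/output price
                working_days = 22

                monthly = tokens_per_day / 1e6 * usd_per_million_tokens * working_days
                print(f"${monthly:,.0f}/month")   # ~$2,200 - in the ballpark of a $2k/mo plan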

          • mirsadm 19 hours ago ago

            Sure they can also code without the help of a model, probably not that much slower.

          • ekjhgkejhgk 15 hours ago ago

            LOL you think you own the means of production?

            People who own the means of production own the company, which hires the board which hires the CEO who hires the executives who hire the manager who hires you. You think the people who own the means of production code? If you code you're closer to a bricklayer than to owning anything.

          • Waterluvian 19 hours ago ago

            Your identity as a real software developer relies on the community's broad, inclusive definition of what it means to be one. Something you're failing to extend to others.

            To be sitting that far out on a limb of software development while sawing at the branches of others is quite an interesting choice.

          • mrbungie 19 hours ago ago

            Pretty edgy response. I'd say trying to scale in price rather than in quantity is a bad business strategy for tech, period, especially if you hope to become Google-sized like OpenAI and company want.

          • actionfromafar 19 hours ago ago

            Are you OpenAI? If not, you can't afford the means of production. You're the sharecropper.

          • vultour 18 hours ago ago

            This is such a hilarious out of touch SV techbro comment I can't believe it's real. You're a monkey with a computer that knows how to Google, there's an endless amount of people who can replace you.

          • undefined 17 hours ago ago
            [deleted]
          • DJBunnies 19 hours ago ago

            Big yikes bro.

      • nightski 19 hours ago ago

        At that cost I'd just buy some GPUs and run a local model though. Maybe a couple RTX 6000s.

        • organsnyder 19 hours ago ago

          That's about as much as my Framework Desktop cost (thankful that I bought it before all the supply craziness we're seeing across the industry). In the relatively small amount of time I've spent tinkering with it, I've used a local LLM to do some real tasks. It's not as powerful as Claude, but given the immaturity in the local LLM space—on both the hardware and software side—I think it has real potential.

          Cloud services have a head-start for quite a few reasons, but I really think we could see local LLMs coming into their own over the next 3-5 years.

        • gbnwl 19 hours ago ago

          Same, but I imagine once prices start rising, the prices of GPUs that can run any decent local models will soar (again) as well. You and I wouldn't be the only people with this idea, right?

          • general1465 19 hours ago ago

            I mean, will it? I would expect that all those GPUs and servers will end up somewhere. Look at old Xeon servers: they all ended up in China. Nobody sane will put a 1U server at home, but the Chinese recycled those servers by making X99 motherboards that take the RAM and Xeon CPUs from the noisy servers and turn them into PCs.

            I would expect that they could sell something like AI computers with a lot of GPU power, built from the similarly recycled GPU clusters in use today.

        • fishpham 19 hours ago ago

          Those won't be sufficient to run SOTA / trillion-parameter models.

          • Zambyte 19 hours ago ago

            And most tasks don't demand that.

          • general1465 19 hours ago ago

            Distilled models are good enough.

      • clownpenis_fart 19 hours ago ago

        I use my brain, it's free

        • co_king_3 19 hours ago ago

          Fitting response for an account called "clownpenis_fart".

          The future is here and it's time to stop ignoring it.

          Your analog 1x productivity is worthless in comparison to my AI backed 10x productivity.

          • sarchertech 19 hours ago ago

            10x productivity means you should have had time to build your own programming language/OS/integrated dev environment or something equally impressive. Can you link to it?

          • throwaway77385 19 hours ago ago

            Yuck.

            https://news.ycombinator.com/newsguidelines.html

            > Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

            > Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

            > When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

            Honestly, if you just made your profile a day ago to yell overly confident and meaningless statements into the void, like a mandrill in the jungle trying to shout over all the others, go back to LinkedIn; they like that kind of stuff there.

            I even agree that AI has a place in our world and can greatly increase productivity. But we should talk about the how and why, instead of attacking others ad hominem and just stopping any discourse with absolutist nonsense.

          • Der_Einzige 18 hours ago ago

            [flagged]

  • fastball 19 hours ago ago

    A significant part of the capex is just energy, so even if there is some sort of AI black swan event and the data centers become obsolete overnight (unlikely), energy is literally the root of all bounty, and it is good that something is incentivizing increased resource allocation in that area.

  • dfilppi 19 hours ago ago

    [dead]

  • xyst 20 hours ago ago

    And this gamble is paid for by American taxpayers, increased cost of utilities, and multibillion-dollar corporations receiving tax breaks/subsidies from the cities/counties they build in.

    This country is so awful. Great if you are rich. Awful if you are not in the top 0.01-1%.

    A massive $79T has been transferred from bottom 90% to top 1% since the 1970s. [1]

    [1] https://www.rand.org/pubs/working_papers/WRA516-2.html

    • jryan49 20 hours ago ago

      I love how they say with a straight face that when AI takes over they will finally share all the fruits of capital with us.

      • BosunoB 19 hours ago ago

        Y'all gotta stop looking at politics this way.

        You know why they don't share the fruits of capital with us now? Because Americans hate getting taxed to pay for welfare, and so they've been voting against taxes for 50 years. This whole political landscape changes when people lose their jobs to AI, a thing that everyone thinks should be taxed. In fact, the entire ideological underpinning behind extreme wealth accumulation is gone when AI runs everything.

        • jryan49 11 hours ago ago

          Works great in other countries with high unemployment. That's exactly what happens! People elect a person who says they are going to change everything to fix it and they never get around to it for some reason :)

          Just look at the mess South America is in...

      • scrollop 19 hours ago ago

        Saw a video summarising this on Gamers Nexus today and it is nauseating - especially Jensen.

      • rozap 19 hours ago ago

        The French had a tool for this problem.

        • sQL_inject 19 hours ago ago

          And look how it has worked out for them.

          • organsnyder 19 hours ago ago

            Not sure what you're implying, but I'd say their society is doing fairly well.

          • webdoodle 19 hours ago ago

            Only because the U.S. and U.K. conspired against them. The French did everything they could to keep the fire burning, by hosting people from various countries to teach them about revolution. Organizing globally against the rich parasites was hard and expensive back then. Now the only holdback is that the rich parasites own most of the internet.

            But WE BUILT IT, and we can take back the internet when we finally realize it's not dems vs reps, but rich vs poor. It's always been a class war; they're just much better at keeping us distracted.

            • sarchertech 19 hours ago ago

              I think we need reforms and I’m very much against the accumulation of power that we’ve allowed the billionaire class.

              But the French Revolution is nothing to emulate. If you've read the history of the French Revolution, you know that it quickly moved on from rich parasites to murdering and imprisoning people over minor philosophical differences and real or perceived lack of enthusiasm for continued murder. And it eventually led to global war and attempted global conquest.

            • gulfofamerica 18 hours ago ago

              [dead]

        • ericmay 19 hours ago ago

          Yes, guns, clubs, fire, and steel weapons. And afterward they had the Reign of Terror, and the rise of the French Emperor Napoleon. It seems like it mostly worked out in the long run, though subsequent World Wars left the French Empire as a weakened shell of itself. In the short term, up until Napoleon was finally taken down by the combined British and Prussian forces at Waterloo, it seemed to have led to all sorts of calamities. How many died? How many did Robespierre manage to get sentenced to death before he met the same fate? Would Napoleon have risen and caused the death of so many?

          One thing would-be revolutionaries don't appreciate is that, well, similar to Mr. Putin's experience today, revolutions (and wars) are much easier to start than to control. One day you're chopping off the leader's head, the next day you are pressed into military service and your Constitution is gone. I personally would rather be patient and work on reforming institutions, even if it takes a much longer time. Often times when we get rid of them, it's not that something better fills the void, as anarchists (communists or libertarians alike) like to claim, but instead it's nothing and that capability is gone until some calamity restores the need.

          • sarchertech 19 hours ago ago

            Exactly this. Violent revolutions are very rarely successful in increasing the average welfare and freedom of the populace.

        • reducesuffering 19 hours ago ago

          Good luck using guillotines on an army of militarized drones outnumbering you 10 to 1.

    • coffeemug 20 hours ago ago

      To be intellectually honest about it, you have to answer a bunch of questions:

      1. Awful compared to what?

      2. Was there an equivalent transfer outside America?

      3. What is the cause? What ratio is rent-seeking/shady activity vs. a consequence of natural forces (e.g. technological change)?

    • throwmeaway820 19 hours ago ago

      > A massive $79T has been transferred from bottom 90% to top 1% since the 1970s

      This assertion is based on comparing reality with a counterfactual where income distributions remained static from 1975 to the present. Real median personal income roughly doubled over this time period.

      The use of the word "transferred" seems a little intellectually dishonest here. The use of the counterfactual seems to suggest that income distribution has no relationship with growth in total income, and total income would have been exactly the same regardless of income distribution. I see no reason to assume that to be the case.

      • yifanl 19 hours ago ago

        Well you have a data point of one, so I guess we live in the best of all possible outcomes?

        • throwmeaway820 19 hours ago ago

          I don't understand what you mean by "data point of one"

          Do you think I'm talking about my own, personal income?

          I'm talking about median personal income in the United States, because the figures I found for household income only go back to 1985

    • tim333 17 hours ago ago

      There's some of that but the vast majority is paid with private sector stuff - business profits and investor money.

    • BloondAndDoom 19 hours ago ago

      If it's any consolation, I'm rich yet the country is still shit. (Comparing to Europe, as a previous immigrant from Western Europe.)

      • hattmall 19 hours ago ago

        Other than a few parasitic industries it's pretty great. If we can just get some common-sense reforms in insurance, healthcare, and advertising, and reverse some regulatory capture, it would be comparatively utopian.

    • francisofascii 20 hours ago ago

      Not to mention all the land being gobbled up to build these data centers.

      • jl6 19 hours ago ago

        Of all the externalities under discussion, I think land use is a very minor one.

      • sQL_inject 19 hours ago ago

        Most of this land was low-utility anyway. You should realise it is good for the landowners to convert it to high-yield output, which in turn the government can tax, returning some of the gains to the people.

        What's the alternative?