Marcus on AI

Marcus on AI critically explores the advancements, limitations, ethical considerations, and societal implications of artificial intelligence technologies. Through detailed analysis, it discusses issues like AI's understanding of math, transparency in AI development, AI risks, copyright infringement by AI, and the potential misuse of AI in various sectors.

AI Advancements and Limitations, Ethics and AI, AI in Society, AI and Law, AI for Military Strategy, Environmental Impact of AI, AI in Healthcare, AI and Education, AI and Jobs, AI Policy and Regulation

The hottest Substack posts of Marcus on AI

And their main takeaways
8932 implied HN points • 23 Feb 25
  1. The U.S. was built on the idea of standing up against oppression. It's important to remember that speaking out is crucial for democracy.
  2. Recent actions by leaders are seen as frightening and could lead to more significant issues if people don't voice their concerns.
  3. Privacy is at risk, with personal information being shared without proper checks. We need to protect our rights and encourage open discussions.
10750 implied HN points • 19 Feb 25
  1. The new Grok 3 AI isn't living up to its hype. It initially answers some questions correctly but quickly starts making mistakes.
  2. When tested, Grok 3 struggles with basic facts and leaves out important details, like missing cities in geographical queries.
  3. Even with huge investments in AI, many problems remain unsolved, suggesting that scaling alone isn't the answer to improving AI performance.
10908 implied HN points • 16 Feb 25
  1. Elon Musk's AI, Grok, is seen as a powerful tool for propaganda. It can influence people's thoughts and attitudes without them even realizing it.
  2. The technology behind Grok often produces unreliable results, raising concerns about its effectiveness in important areas like government and education.
  3. There is a worry that Musk's use of biased and unreliable AI could have serious consequences for society, as it might spread misinformation widely.
3161 implied HN points • 17 Feb 25
  1. AlphaGeometry2 is a specialized AI designed specifically for solving tough geometry problems, unlike general chatbots that tackle various types of questions. This means it's really good at what it was built for, but not much else.
  2. The system's impressive 84% success rate comes with a catch: it only achieves this after converting problems into a special math format first. Without this initial help, the success rate drops significantly.
  3. While AlphaGeometry2 shows promising advancements in AI problem-solving, it still struggles with many basic geometry concepts, highlighting that there's a long way to go before it can match high school students' understanding in geometry.
7825 implied HN points • 13 Feb 25
  1. OpenAI's plan to just make bigger AI models isn't working anymore. They need to find new ways to improve AI instead of just adding more data and parameters.
  2. The new version, originally called GPT-5, has been downgraded to GPT-4.5. This shows that the project hasn't met expectations and isn't a big step forward.
  3. Even if pure scaling isn't the answer, AI development will continue. There are still many ways to create smarter AI beyond just making models larger.
7114 implied HN points • 11 Feb 25
  1. Tech companies are becoming very powerful and are often not regulated enough, which is a concern.
  2. People are worried about the risks of AI, like misinformation and bias, but governments seem too close to tech companies.
  3. It's important for citizens to speak up about how AI is used, as it could have serious negative effects on society.
8457 implied HN points • 09 Feb 25
  1. Drastic cuts to funding for science and universities could hurt America's future. Less money means fewer resources for research and education.
  2. Many talented scientists and academics might leave the country because of these funding cuts. This can damage the reputation of American universities.
  3. The decisions being made could have negative effects even on people in red states, showing that these cuts impact everyone, not just certain areas.
13161 implied HN points • 04 Feb 25
  1. ChatGPT still has major reliability issues, often providing incomplete or incorrect information, like missing U.S. states in tables.
  2. Despite being advanced, AI can still make basic mistakes, such as counting vowels incorrectly or misunderstanding simple tasks (see the sketch after this list).
  3. Many claims about rapid progress in AI may be overstated, as even simple functions like creating tables can lead to errors.
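
Not from the post, but for context: the failures described above involve checks that a few lines of deterministic code get right every time, which is what makes a frontier model's errors on them notable. A minimal illustrative sketch with made-up data:

```python
# Illustrative only: the kinds of checks described above are trivial for
# deterministic code, which is why an LLM's mistakes on them stand out.

def count_vowels(text: str) -> int:
    """Count vowels (a, e, i, o, u) in a string, case-insensitively."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

def missing_entries(table_rows: list[str], expected: set[str]) -> set[str]:
    """Return expected items (e.g. U.S. states) absent from a generated table."""
    return expected - set(table_rows)

if __name__ == "__main__":
    print(count_vowels("artificial general intelligence"))  # 13
    # Hypothetical example: a model-generated table that silently drops a state.
    generated = ["Alabama", "Alaska", "Arizona"]
    print(missing_entries(generated, {"Alabama", "Alaska", "Arizona", "Arkansas"}))
    # {'Arkansas'}
```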
14386 implied HN points • 03 Feb 25
  1. Deep Research tools can quickly generate articles that sound scientific but might be full of errors. This can make it hard to trust information online.
  2. Many people may not check the facts from these AI-generated writings, leading to false information entering academic work. This could cause problems in important fields like medicine.
  3. As more of this low-quality content spreads, it could harm the credibility of scientific literature and complicate the peer review process.
23595 implied HN points • 26 Jan 25
  1. China has quickly caught up in the AI race, showing impressive advancements that challenge the U.S.'s previous lead. This means that competition in AI is becoming much tighter.
  2. OpenAI is facing struggles as other companies offer similar or better products at lower prices. This has led to questions about their future and whether they can maintain their leadership in AI.
  3. Consumers might benefit from cheaper AI products, but there's a risk that rushed developments could lead to issues like misinformation and privacy concerns.
7074 implied HN points • 09 Feb 25
  1. Just adding more data to AI models isn't enough to achieve true artificial general intelligence (AGI). New techniques are necessary for real advancements.
  2. Combining neural networks with traditional symbolic methods is becoming more popular, showing that blending approaches can lead to better results.
  3. The competition in AI has intensified, making large language models somewhat of a commodity. This could change how businesses operate in the generative AI market.
5138 implied HN points • 11 Feb 25
  1. Sam Altman is struggling to restructure OpenAI's nonprofit, and the effort is creating financial problems for the company. Investors are not happy with how things are going.
  2. Elon Musk's recent $97 billion bid for OpenAI's nonprofit has complicated the situation. Altman rejected the bid, which makes it tougher for him to negotiate a better deal.
  3. Musk's bid has raised the 'cost' of separating OpenAI's for-profit arm from the nonprofit, adding pressure on Altman and his financial plans.
8813 implied HN points • 06 Feb 25
  1. Once something is released into the world, you can't take it back. This is especially true for AI technology.
  2. AI developers should consider the consequences of their creations, as they can lead to unexpected issues.
  3. Companies may want to ensure genuine communication from applicants, but relying on AI for tasks is now common.
4703 implied HN points • 09 Feb 25
  1. Large language models (LLMs) can make mistakes, sometimes creating false information that is hard to spot. This is a recurring issue that has not been fully addressed over the years.
  2. Google has been called out for its ongoing issues with LLMs failing to provide accurate results, as these problems seem to occur regularly.
  3. The idea of rapid improvements in AI technology may be overhyped, as the same mistakes keep happening, indicating slower progress than expected.
6481 implied HN points • 05 Feb 25
  1. Google's original motto was 'Don't Be Evil,' but that seems to have changed significantly by 2025. This shift raises concerns about the company's intentions and actions involving powerful AI technologies.
  2. The current landscape of AI development is driven by competition and profits. Companies like Google feel pressured to prioritize making money over ethical considerations.
  3. There is fear that as AI becomes more powerful, it may end up in the wrong hands, leading to potentially dangerous applications. This evolution reflects worries about how society and businesses are dealing with AI advancements.
12133 implied HN points • 28 Jan 25
  1. DeepSeek is not smarter than older models. It just costs less to train, which doesn't mean it's better overall.
  2. It still has issues with reliability and can be expensive to run if you want it to 'think' for longer.
  3. DeepSeek may change the AI market and pose challenges for companies like OpenAI, but it doesn't bring us closer to achieving artificial general intelligence (AGI).
3003 implied HN points • 10 Feb 25
  1. The Paris AI Summit did not meet expectations and left many attendees unhappy for various reasons. People felt that it was poorly organized.
  2. A draft statement prepared for the summit was criticized, with concerns that it would let leaders avoid making real commitments to addressing AI risks. Many believed it was more of a PR move than genuine action.
  3. Despite the chaos, French President Macron seemed to be the only one enjoying the situation. Overall, many felt it was a missed opportunity to discuss important AI issues.
8655 implied HN points • 29 Jan 25
  1. DeepSeek might have broken OpenAI's rules by using their ideas without permission. This raises questions about respect for intellectual property in tech.
  2. OpenAI itself may have done similar things to other platforms and creators in the past. This situation highlights a double standard.
  3. There's a sense of irony in seeing OpenAI in a tough spot now, after it benefited from similar practices. It shows how karma can come back around.
4979 implied HN points • 29 Jan 25
  1. In the race for AI, China is catching up to the U.S. despite export controls. This shows that innovation can thrive under pressure.
  2. DeepSeek suggests we can achieve AI advancements with fewer resources than previously thought. Efficient ideas might trump just having lots of technology.
  3. Instead of just funding big companies, we need to support smaller, innovative startups. Better ideas can lead to more successful technology than just having more money.
6165 implied HN points • 22 Jan 25
  1. OpenAI is launching a big project called The Stargate Project, which plans to invest $500 billion in U.S. AI infrastructure over the next four years. They hope this will strengthen the country's economy and national security.
  2. Elon Musk is skeptical about the funding and the true financial health of OpenAI. He suggests that previous promises may not hold true and questions whether this project will really benefit the American people.
  3. There are several uncertainties about this project, like whether developing AI will actually be profitable and how it might impact jobs. People worry if the profits will help everyone or just the rich, and if the U.S. can truly keep up with China's advancements in AI.
4228 implied HN points • 27 Jan 25
  1. Nvidia's stock might be facing a big drop, which is a concern for investors. A decline of more than 10% indicates that something significant is going on in the market.
  2. The market can behave in unpredictable ways, and this uncertainty can be tough for investors to manage. Today might be a key moment in the stock market.
  3. Overall, the economics of generative AI can lead to unexpected changes, making it a wild area to watch for investors and tech enthusiasts.
4466 implied HN points • 20 Jan 25
  1. Many people believe AGI, or artificial general intelligence, is coming soon, but that might not be true. It's important to stay cautious and not believe everything we hear about upcoming technology.
  2. Sam Altman, a well-known figure in AI, suggested we're close to achieving AGI, but he later changed his statement. This shows that predictions in technology can quickly change.
  3. Experts like Gary Marcus are confident that AGI won't arrive as soon as 2025. They think we still have a long way to go before we reach that level of intelligence in machines.
7786 implied HN points • 06 Jan 25
  1. AGI is still a big challenge, and not everyone agrees it's close to being solved. Some experts highlight many existing problems that have yet to be effectively addressed.
  2. There are significant issues with AI's ability to handle changes in data, which can lead to mistakes in understanding or reasoning. These distribution shifts have been seen in past research.
  3. Many believe that relying solely on large language models may not be enough to improve AI further. New solutions or approaches may be needed instead of just scaling up existing methods.
8181 implied HN points • 01 Jan 25
  1. In 2025, we still won't have genius-level AI like 'artificial general intelligence,' despite ongoing hype. Many experts believe it is still a long way off.
  2. Profits from AI companies are likely to stay low or nonexistent. However, companies that make the hardware for AI, like chips, will continue to do well.
  3. Generative AI will keep having problems, like making mistakes and being inconsistent, which will hold back its reliability and wide usage.
5019 implied HN points • 13 Jan 25
  1. We haven't reached Artificial General Intelligence (AGI) yet. People can still easily come up with problems that AI systems can't solve without training.
  2. Current AI systems, like large language models, are broad but not deep in understanding. They might seem smart, but they can make silly mistakes and often don't truly grasp the concepts they discuss.
  3. It's important to keep working on AI that isn't just broad and shallow. We need smarter systems that can reliably understand and solve different problems.
4545 implied HN points • 15 Jan 25
  1. AI agents are getting a lot of attention right now, but they still aren't reliable. Most of what we see this year are just demos that don't work well in real life.
  2. In the long run, we might have powerful AI agents doing many jobs, but that won't happen for a while. For now, we need to be careful about the hype.
  3. To build truly helpful AI agents, we need to solve big challenges like common sense and reasoning. If those issues aren't fixed, the agents will continue to give strange or wrong results.
6205 implied HN points • 07 Jan 25
  1. Many people are changing what they think AGI means, moving away from its original meaning of being as smart as a human in flexible and resourceful ways.
  2. Some companies are now defining AGI based on economic outcomes, like making profits, which isn't really about intelligence at all.
  3. A lot of discussions about AGI don't clearly define what it is, making it hard to know when we actually achieve it.
3952 implied HN points • 16 Jan 25
  1. Large Language Models (LLMs) may increase security problems that already exist and also create new ones. It's important to be cautious as technology evolves.
  2. Keeping AI systems safe is an ongoing task that can never fully be completed. Security needs constant attention as risks change.
  3. Relying heavily on AI in everyday life could lead to serious problems. It's essential to consider the potential dangers before implementing AI widely.
5968 implied HN points • 05 Jan 25
  1. AI struggles with common sense. While humans easily understand everyday situations, AI often fails to make the same connections.
  2. Current AI models, like large language models, don't truly grasp the world. They may create text that seems correct but often make basic mistakes about reality.
  3. To improve AI's performance, researchers need to find better ways to teach machines commonsense reasoning, rather than relying on existing data and simulations.
6007 implied HN points • 30 Dec 24
  1. A bet has been placed on whether AI can perform 8 out of 10 specific tasks by the end of 2027. It's a way to gauge how advanced AI might be in a few years.
  2. The tasks include things like writing biographies, following movie plots, and writing screenplays, which require a high level of intelligence and creativity.
  3. If the AI succeeds, a $2,000 donation goes to one charity; if it fails, a $20,000 donation goes to another charity. This is meant to promote discussion about AI's future.
4189 implied HN points • 09 Jan 25
  1. AGI, or artificial general intelligence, is not expected to arrive in 2025. This means that machines won't be as smart as humans anytime soon.
  2. The release of GPT-5, a new AI model, is also uncertain. Even experts aren't sure if it will be out this year.
  3. There is a trend of people making overly optimistic predictions about AI. It's important to be realistic about what technology can achieve right now.
6481 implied HN points • 21 Dec 24
  1. OpenAI's new model, o3, was shown in a demo, but we can't be sure yet if it truly represents advanced AI or AGI. The demo only highlighted what OpenAI wanted to show and didn't allow public testing.
  2. The cost of using o3 is really high, potentially making it impractical compared to human workers. Even if it gets cheaper, there are concerns about how effective it would be across different tasks.
  3. Many claims about reaching AGI might pop up in 2025, but those claims need to be taken with caution. True advances in AI should involve solving more foundational problems rather than just impressive demos.
7035 implied HN points • 14 Dec 24
  1. Generative AI is raising big questions about copyright. Many people are unsure if the way it uses data counts as fair use under copyright laws.
  2. There have been cases where outputs from AI models were very similar to copyrighted material. This has led to lawsuits, showing that the issue isn't going away.
  3. Speaking out against big tech companies can be risky. There needs to be more protection for those who voice concerns about copyright and other serious issues.
1778 implied HN points • 21 Jan 25
  1. The 2023 White House Executive Order on AI has been revoked. This means any rules or plans it included are no longer in effect.
  2. Elon Musk's worries about AI safety may seem less relevant now that the order is gone. People might question if precautions were necessary.
  3. The change could lead to different approaches in handling AI development and regulation in the future. It opens the door for new discussions on AI safety.
13754 implied HN points • 09 Nov 24
  1. LLMs, or large language models, are hitting a point where adding more data and computing power isn't leading to better results. This means companies might not see the improvements they hoped for.
  2. The excitement around generative AI may fade as reality sets in, making it hard for companies like OpenAI to justify their high valuations. This could lead to a financial downturn in the AI industry.
  3. There is a need to explore other AI approaches since relying too heavily on LLMs might be a risky gamble. It might be better to rethink strategies to achieve reliable and trustworthy AI.
6639 implied HN points • 12 Dec 24
  1. AI systems can say one thing and do another, which makes them unreliable. It's important not to trust their words blindly.
  2. The increasing power of AI could lead to significant risks, especially if misused by bad actors. We might see more cybercrime driven by these technologies soon.
  3. Delaying regulation on AI increases the risks we face. There is a growing need for rules to keep these powerful tools in check.