Over the last 18 months, we’ve hosted conversations with 30+ leaders from every realm of AI – including academia, enterprise leadership, policy, and research.
We’ve posed a lot of good questions, but they’ve given us even more to chew on. So here are 100 AI ideas to consider, from the greatest minds in developing AI, studying it, and using it.
TL;DR: The big takeaways
Don’t have time to read 100 POVs? Here’s what all this expert insight boils down to:
- Fast followers who act with purpose, strong governance, and clarity on outcomes will outperform those who wait or race blindly.
- Whether your AI is “good” or “bad” comes down to you and the values you bring to it – so leaders need solid AI policies built on core company ethos.
- Work is evolving from roles to tasks, so all knowledge workers need to shift how they think about their jobs and how they can communicate their value.
- You need alignment between tech, strategy, and frontline teams. Silos kill momentum.
- The most successful AI strategies are tied to clear, high-impact business problems—not just broad hopes for productivity.
- If you’re building or deploying AI systems, prepare to be accountable for their downstream effects – technically, legally, and ethically.
The top 100 POVs
💡Humans need to actively manage the transition to AI
- "If you get into an Uber and feel like, 'Oh my God, someone else is behind the wheel,' you have a very bad ride. But if you go, 'No, I summoned it, it’s part of my agency,' then you have a very different experience. That’s how we should approach AI—it's about reframing how we perceive control." – Reid Hoffman, Co-founder of LinkedIn
- "I think there's a real risk of social unrest if companies don’t manage the transition well. The friction in the workforce is real – if we free up jobs too quickly without giving people time to adapt, it could lead to serious economic disruptions." – Daniel Hulme, Chief AI Officer of WPP
- “Loneliness is an emotion that is there to tell you that you need social connections for your safety as a human being. It is part of what made humanity thrive and survive. And sadly, those emotions are being exploited by the tech giants in ways that basically imprison us. The addictiveness of social media and swiping is not a coincidence in any way. Those algorithms know really well how to exploit those emotions.” – Mo Gawdat, former CBO of Google X
- "We need to proactively address societal changes from AI before they hit full force. The writers' strike in Hollywood was a good example—they didn’t wait for AI to start replacing jobs before advocating for protections." – Dan Hendrycks, Director of Center for AI Safety
- "The transitions are going to be painful, no question. But we handle them poorly when we don't embrace the tools and get ahead of the curve. It's not about eliminating the bad futures but steering towards the quality futures." – Reid Hoffman, Co-founder of LinkedIn
- "If AI can bring the cost of goods down to almost zero, we might be looking at a world where people don’t need to work to survive. Imagine being born into a world where food, healthcare, transport, and education are nearly free. It’s not Socialism; it’s using AI to create abundance." – Daniel Hulme, Chief AI Officer of WPP
- "The structure that got put in place during the Industrial Revolution was designed to concentrate power and wealth among a small group of people. And that structure hasn’t fundamentally changed. Now, we’re just plugging AI into that same system." – Brian Merchant, Author of Blood in the Machine
- “Algorithms know exactly what you like to see in a person and then exaggerate that for you, so that it becomes almost a downgrade to be with real humans. You create the image of a partner that doesn't object and doesn't push back and doesn't have irrational emotions. We will move further and further away because the machines are enticing us to move further and further away from human connection." – Mo Gawdat, former CBO of Google X
💡… But building an ethical AI-powered world isn’t easy
- "The question is, who is AI serving, and who is it harming? The Luddites asked that question during the Industrial Revolution, and I think it’s the same question we need to ask now about AI." – Brian Merchant, Author of Blood in the Machine
- "We need to provide equitable access for everybody to have basic AI capabilities and knowledge and tools. This is a great democratizer of human work and creativity. But if we create barriers to access, we will exacerbate the inequalities we already have in society." – Kian Gohar, CEO of Geolab
- “A little bit of a misunderstanding the public has generally is: There aren’t such things as an ethical versus an unethical system. What there is, is different values that are prioritized with different systems. So really the sort of ethical thing with technology is thinking about what the trade-offs are and ultimately making the decisions that align with what your company prioritizes as values.” — Margaret Mitchell, Chief Ethics Scientist of Hugging Face
- "Maybe we want to have as much democratic input as we can into how our society embraces or selectively uses or goes headlong into the brave new future with AI. I think everybody just has to have their input and a seat at the table." – Brian Merchant, Author of Blood in the Machine
- "If we’re building AI tools that don’t work as well in Spanish as they do in English, then we’re building a world where certain populations get worse information, which is a form of structural inequality baked into technology." – Dr. Alondra Nelson, former Deputy Director for Science and Society at the White House OSTP
- “I do think we will see and are starting to see technology that speaks to how efficient it is in terms of energy costs or electricity costs, carbon emissions. We will see technology that's consentfully trained, similar to fair trade.” – Margaret Mitchell, Chief Ethics Scientist of Hugging Face
- "It’s not just about replacing workers. It’s about making everyone more productive. But the question is, who benefits from that productivity? Is it the workers themselves, or is it the shareholders and the C-suite?" – Azeem Azhar, Founder of Exponential View
- “There’s something to be said about who gets access, who uses these tools, and how transparently they’re doing it. If some people get ahead by using it and others don’t even know it’s on the table, that’s an ethical problem.” – Marc Zao-Sanders, CEO of filtered.com
- “There's a lot of people who say, ‘don't anthropomorphize AI’. But I strongly believe that getting it to emulate human behavior actually is dependent on us being able to relate to it like a partner or a colleague." – Kian Gohar, CEO of Geolab
- "One thing that I think is really important to recognize is that there is essentially an accountability chain for AI. The various societal impacts, problematic costs, and also benefits come from a variety of stakeholders who sort of bring the AI interaction about. And that means it's not any sort of single stakeholder who's accountable for dealing with possible misuse, problematic use.” — Margaret Mitchell, Chief Ethics Scientist of Hugging Face
💡And of course, we should all be worried about Big Tech’s power
- "Corporations, the largest corporations in the world, are going to function more and more like states. They are already operating like nation states and they have acquired the most talented people in every division imaginable." – Mustafa Suleyman, CEO of Microsoft AI
- "We need to think critically about how we allow narratives to be shaped by a small group of powerful tech leaders and ask who else gets to have a say in what the future looks like." – Dr. Alondra Nelson, former Deputy Director for Science and Society at the White House OSTP
- "The real issue in our world today is that we've disconnected power from responsibility. Sam Altman and OpenAI can create something that completely destroys our world, and there is not a single line of legal legislation out there that prevents them from doing that." – Mo Gawdat, former CBO of Google X
- “The big fear from a geopolitical standpoint is AI as a control layer for digital infrastructure. Imagine if one country controls the majority of AI systems that everyone else relies on. That’s a level of leverage we’ve never seen before. We are in a period of techno-nationalism, and AI is right at the center of it." – Azeem Azhar, Founder of Exponential View
- "We're in a system that requires a kind of hype to create the values, to create the stock market prices that go into people's 401ks. That doesn't mean that the technology can actually do what the hype says it can do." – Dr. Alondra Nelson, former Deputy Director for Science and Society at the White House OSTP
💡Regulation is essential, because companies won’t do it on their own
- "We’re at a point where government needs to step in and provide compute resources for universities and startups, so they aren’t completely dependent on big tech for data access and GPUs. Otherwise, we’re creating a monoculture where only a handful of companies control AI development." – Dr. Alondra Nelson, former Deputy Director for Science and Society at the White House OSTP
- "I think there are ways to incentivize for-profit companies to work for the public good, at least not to harm the public good. And one of the things I've been pushing in terms of regulation is that it should be the company's responsibility. We need to shift the burden here. And that's going to increase the investment in the necessary R&D to better understand what can go wrong.” – Dr. Yoshua Bengio, “Godfather of AI”
- "There's no established liability law for AI. If a model breaches a contract, who is responsible—the developer, the user, or the company? We need to think critically about how we structure liability along the AI stack." – Dr. Alondra Nelson, former Deputy Director for Science and Society at the White House OSTP
- "We need legislation where AIs are tied to human-backed legal entities. That way, if an AI does something wrong, there is a responsible party to hold accountable." – Dan Hendrycks, Director of Center for AI Safety
- "We talk a lot about liability, but what people really want is regulation—something that can be implemented faster than a decade-long class action lawsuit that ends with everyone getting 29 cents from a tech giant." – Dr. Alondra Nelson, former Deputy Director for Science and Society at the White House OSTP
💡Gen AI will upend all our business models
- "The problem with enterprise AI is that organizations are applying the wrong technology to the wrong problems. A lot of these pilots get stuck because companies are not solving real frictions—they're building solutions in search of a problem." – Daniel Hulme, Chief AI Officer of WPP
- "The future potential competition is often missed by companies. Too often, leaders look at current competition without considering where that competition is moving to. That's the relevant thing." – Reid Hoffman, Co-founder of LinkedIn
- "Prior to generative AI, we were using about 10% of the world's information—structured information. With GenAI, we're now using 100%, maybe more if you include synthetic data. We're just barely scratching the surface of what's possible." – John Thompson, Head of AI at EY
- “Isn’t this just like when calculators first appeared? If you’re using one and no one else is, of course it feels strange. But once everyone’s using one, it’s just how the world works. There’s a cultural curve we’re all moving through right now.” – Marc Zao-Sanders, CEO of filtered.com
- “We are moving into an age of abundance where companies that were labor-constrained can do things and add new products, even go into new geographies, without the kind of traditional labor expansion that’s historically been needed. This is very different than any conversation ever had about technology.” – Marc Benioff, Founder of Salesforce
- "Often, incumbents struggle with change unless there's a crisis. They are not going to make the pivot easily because they're comfortable. They don't want to undermine the ways they’re currently making their money. I was at Microsoft when the pivot from being an OS provider to being a cloud provider happened. For a company that big to consciously abandon a product line, or at least recognize it was legacy and heavily invest in a new one, that was impressive to see up close." – Jan Pedersen, former VP of AI at Walmart
- "If you're in an industry as a mid-sized business and you adopt AI and your competitors – you look left, you look right, they're not really adopting AI – you might suddenly have 15% margins, 30% EBITDA margins." — Marc Bhargava, Managing Director of General Catalyst
- "Until AI recedes into the background like electricity, it's not going to be truly valuable. That's where AI is going—it's going to be AI inside everything, like Intel Inside." – John Thompson, Head of AI at EY
💡Where’s the risk? Data, model quality, and moving too fast
- "There's been a push to launch, launch, launch, even when the product is not necessarily ready, not necessarily high quality, and maybe it doesn't directly work for various use cases. And so we've kind of seen the public learning this the hard way — there are really cool creative ideas. But there's also catastrophic failures." – Margaret Mitchell, Chief Ethics Scientist of Hugging Face
- "In the enterprise, AI security is not just about preventing data from leaking outside the company. It’s about making sure that the right people inside the company only have access to the right data. That’s actually the bigger concern right now." – Arvind Jain, CEO of Glean
- "There are three big buckets of risk with AI in the enterprise—business risk, compliance risk, and model risk. Business risk is primarily around hallucination, compliance is around IP and data privacy, and model risk is whether the AI is doing something it shouldn't do." – David Morse, former CRO at Hebbia AI
- "A lot of the technology that was rolled out during the Industrial Revolution was worse in quality than what it replaced. The cloth fell apart faster, it was lower quality, but it was good enough. That’s what we’re seeing now with AI content—output that’s 'good enough' but not great, and it's being used to justify firing people." – Brian Merchant, Author of Blood in the Machine
- “Organizations have to know what they can put in and what they can't. You have to understand what's proprietary and what's not. Just because you're talking about your marketing plan with ChatGPT doesn't mean you're exposing sensitive data, but people conflate those things all the time." – Conor Grennan, Chief AI Architect of NYU Stern School of Business
- "A lot of AI pilots get stuck because the models are smart, but they don't know anything about the enterprise data. And companies realize that AI is only as good as the data it has access to." – Arvind Jain, CEO of Glean
- "We found that people who have access to AI, they satisfy themselves with the first plausible answer. That doesn't mean that's the best answer. The ones who performed better had a back-and-forth conversation with AI and treated it like a conversation partner rather than a magic answer machine." – Kian Gohar, CEO of Geolab
💡AI will force us to rethink our value as humans
- "The real differentiator in the AI era is not who can use AI, but who can use AI creatively. It’s not just about automation—it’s about augmentation, and that requires a different mindset." – Azeem Azhar, Founder of Exponential View
- “What I think has always been the mark of standout creativity is having the same inputs, the same tools that everybody else has, and yet doing something unique and special that is different than what everybody else does. And I think that opportunity never goes away. It's just the baseline changes." – Dr. Peter Stone, Chief Scientist of Sony AI
- "We face the same anxiety across our organizations, which is what does it mean for my role and my value to an organization or my job security? But I think it's actually the people that use these tools that become more valuable." – Bhavesh Dayalji, Chief AI Officer at S&P Global
- "The real value of AI comes when you use it to do things you couldn’t do before. Like going through all your customer conversations and synthesizing that feedback at scale. You can’t do that as a human. AI can." – Arvind Jain, CEO of Glean
- "I don't think there's ever gonna be a lack of opportunities to be a standout or to be creative. If an AI tool raises the bar, the entry level is now such that it's easy for everybody to create a sort of mediocre output. But there's still going to be room for the person who has the unique perspective on how to use the tools." – Dr. Peter Stone, Chief Scientist of Sony AI
- "I hear a lot of people saying AI is going to take all the jobs. I don’t buy it. The question is not 'Will AI replace me?' but 'How can I use AI to make myself irreplaceable?' If you frame it that way, the fear subsides and the opportunity becomes clear." – Zain Kahn, Founder of Superhuman
💡And our roles are going to change, permanently
- "We are entering a DIY era in marketing, where the generalists can be specialists. All of a sudden, I have superhuman powers. I can illustrate the way I never could have imagined I'd be able to. I can write. I can design. And we need to embrace that transformation." – Shiv Singh, Co-founder of AI Trailblazers
- "I think we’re heading into a world where English is the new programming language. You can ask AI to build a website, write some Python code, or simulate a strategic framework, and it will just do it. That’s a fundamentally new way of interacting with technology." – Dan Shipper, CEO of Every
- "The marketing ecosystem from a spend standpoint is blossoming. The need for marketing has never been greater, but the irony is that for every aspect of marketing—from creative to performance to customer service—AI can do it more effectively with fewer people." – Shiv Singh, Co-founder of AI Trailblazers
- "Think of AI as an arbitrage opportunity right now. If you can get proficient with ChatGPT, Midjourney, and maybe one data analysis tool, you’re already ahead of 80-90% of marketers out there." – Zain Kahn, Founder of Superhuman
- "If you're in marketing today, you need to define your brand much more tightly, so it stands out in a world of sameness. You need to be much more tapped into culture, in a way that you may not have been in the past." – Shiv Singh, Co-founder of AI Trailblazers
- "One of the things I love about AI is how it fills in skill gaps. If you're a creative person who can’t code, you can still build small apps by prompting AI. Even if the code isn’t perfect, it’s enough to get a prototype up and running in days instead of months." – Dan Shipper, CEO of Every
- “For any kind of creative activity, there is some aspect that is sort of drudgery. Like if you're a painter, you have to prime your canvas. Nobody feels creative when they're priming their canvas. And there's sort of examples of that in any form of creative endeavor. Ideally, if AI technologies are used in the right way, that for the professional creatives it would minimize the drudgery time and maximize time in the flow.” – Dr. Peter Stone, Chief Scientist of Sony AI
- "In a world of AI, the CMO is still the symphony conductor. They will always have that extremely important role to conduct the symphony of their brand in a way that moves the hearts and minds of human beings, but also of AI agents as well." – Shiv Singh, Co-founder of AI Trailblazers
💡Who’s at risk? Legacy companies, older workers, and the inflexible
- "If you stay at a legacy company, you don't know when they’re bringing in the AI and when they're making the decision to have a layoff. Your best move is to move to a forward-looking future company that is already trying to upskill you, trying to upskill their workers." – Sania Khan, former Chief Economist of Eightfold AI
- "Older workers are more at risk because they're less likely to re-skill and adapt. But for those coming out of college or in the first 10-15 years of their careers, AI represents a chance to do more interesting work, not less." – John Thompson, Head of AI at EY
- "If we look at the level of technology we have right now, I think we're entering a workplace where if you're agile, if you're flexible, if you're curious and willing to learn, there's going to be all sorts of opportunities to have impact that no single human really could have five years ago." – Mark Daley, Chief AI Officer of Western University
- "When you know what you want from your career, you can then use AI to make that goal happen a lot faster. If you use AI to automate tasks that didn't bring you joy and they ate up a lot of your time, then you can focus more intentionally on getting better at your job." – Ashley Gross, CEO of AI Workforce Alliance
- "People tend to get what I call 'capability blind.' You try something once, it doesn't work, and then you never try it again. But the space is moving so fast that what didn't work a year ago may work exceptionally well now. That's what I found with using AI for writing." – Dan Shipper, CEO of Every
- “I think the ability for us to empower junior people to get to the more interesting, exciting stuff faster in their careers, is something we should absolutely all be trying to do.” – May Habib, CEO of Writer
💡So leaning into learning AI is critical
- "Even in a world where AI can do everything, humans will still want to learn. It’s like baking sourdough bread. Even though I can buy bread at the store, I bake it with my kids because it’s about the experience, not just the output." – Mark Daley, Chief AI Officer of Western University
- "I tell people, get in the game. Take your resume, ingest it into one of these models, and start playing with it. You can't make mistakes. You can't break it. And as soon as the session's over, all that information evaporates anyway." – John Thompson, Head of AI at EY
- "I have an AI manifesto for my students: They have to use AI, but they should never fully trust it. Validate everything. And that opens up great conversations about how we validate and what we accept as credible evidence." – Dr. Philippa Hardman, Creator of the DOMS AI process
- "The question we have to ask ourselves is: What does it mean for human research if the AI is doing the part that we love, the creative part, and we are left just implementing the ideas?" – Mark Daley, Chief AI Officer of Western University
- "I think it's at this point irresponsible of any professor or high school teacher or middle school teacher to give an assignment to their students and not also see what GPT would do if you gave them that assignment." – Dr. Peter Stone, Chief Scientist of Sony AI
- "Imagine AI as a Clippy-like assistant that understands your context and offers real-time support. If you’re about to give a presentation, it can say, 'Hey, do you want some pointers on public speaking?' That’s where AI can be transformative—learning in the flow of work." – Dr. Philippa Hardman, Creator of the DOMS AI process
- "Evidence-based prompting is what I teach my students. Before you ask AI to do something, ask yourself, 'What evidence do I have that this is the best way to do it?' Otherwise, you’re just automating bad processes faster." – Dr. Philippa Hardman, Creator of the DOMS AI process
💡Implementing AI at a company is more change management than software deployment
- “If you're a shepherd and you want the cows to cross the river, you need one cow to cross the river first. And you want it to make a sound when it moves, to be the bellwether, so every other cow turns their head and thinks, ‘huh, something is going on’. You need that first group with AI, the leaders of change, they create the momentum for everyone else.” – Brice Challamel, Head of AI at Moderna
- "If you give everyone in your company $50,000 worth of Copilot licenses and tell them to go at it, it’s like buying treadmills for everyone in America and expecting heart disease to go down. There is no behavioral change there, and without it, the tools won't get used effectively." – Conor Grennan, Chief AI Architect of NYU Stern School of Business
- "The key to ensure that your team isn't just reacting to AI but thriving in this AI-powered future is to focus on three steps: 1) Analyzing how tasks are being disrupted by AI, 2) Forecasting the future skills needed, and 3) Redesigning roles to align with those future skills." – Sania Khan, former Chief Economist of Eightfold AI
- "Training is absolutely critical. Before you roll out an AI tool, senior management has to understand what a new workday looks like and what the new benchmarks for productivity are. If they don’t, they won't know how to measure or leverage the AI effectively." – Conor Grennan, Chief AI Architect of NYU Stern School of Business
- "If you're going to deploy LLMs across the organization, you need to enable people to self-serve, but you also need a parallel process to identify real frictions and build targeted AI solutions for those problems." – Daniel Hulme, Chief AI Officer of WPP
- "We have to teach people how to use AI like they would a colleague. Imagine you had the head of the Costa Rican tourism board sitting at a desk, and you could ask them anything about Costa Rica. That’s ChatGPT. But people keep using it like a Google search bar, and they’re missing the whole point." – Conor Grennan, Chief AI Architect of NYU Stern School of Business
💡The call to leaders: Set a plan, identify your AI champions, and then move fast
- “Zoomers hit the acceleration, they think everything that can happen is good, so just go as fast as you can. Bloomers go fast but drive intelligently. And that's what I'm advocating for—drive fast, but be smart about it." – Reid Hoffman, Co-founder of LinkedIn
- "The change management, the governance, the incremental steps you need to do to move an enterprise — these are not agile companies that can turn on a dime, right? So you have to be really strategic because these companies need to carry accountability to Wall Street in the current bills they're paying on the current engines they have. You can't mess that up while you move to the new world, and that's the change management." — Michael Park, Global Head of AI GTM at ServiceNow
- "The real opportunity is in understanding that AI adoption is not about replacing entire jobs but rather automating specific tasks within jobs. We need to think of jobs as bundles of tasks and identify which of those tasks can be automated while others require human expertise." – Sayash Kapoor, Co-author of AI Snake Oil
- "We should be ingenious enough to invent ways for everybody to benefit from these technologies. If we’re just making some people more productive and others redundant, then we’re missing the point of progress." – Brian Merchant, Author of Blood in the Machine
- “It’s powerful to work bottom-up in your AI deployment because it's highly scalable. You get basically a bunch of low-risk innovation from a broad set of users in your company that are well-positioned to make good decisions and identify good use cases. And it costs almost nothing. And what you find is that it has this evangelizing effect. People start getting excited about AI.” – Edmundo Ortega, Partner of Machine & Partners
- "A lot of people are waiting for AI to stabilize before engaging with it, and I think that's a big mistake. The best way to prepare for what's coming is to start using it now, even if it feels a little premature." – Reid Hoffman, Co-founder of LinkedIn
- “You need to have a really connected team. The budget is with the CIO but they need to be tight with the business leaders. AI is a team sport. And if you’re selling to an organization that doesn’t understand that everybody’s going to be chasing their tail because they’re already starting at a deficit." — May Habib, CEO of Writer
- "If you’re evaluating these tools, assess them based on things you’re an expert in—tasks where you know the right answers. As soon as you move outside your expertise, it’s easy to overestimate what these models can actually do." – Sayash Kapoor, Co-author of AI Snake Oil
- "You should be thinking about AI as your everyday thought partner. It doesn't work if it's for occasional use. If you use AI occasionally, it will always underperform." – Shiv Singh, Co-founder of AI Trailblazers
- "Claude and ChatGPT have access to an endless array of mental models and strategic frameworks. So if you're making a decision, you can ask it to apply inversion thinking or other frameworks. It’s like having a thought partner with a massive database of strategic approaches." – Dan Shipper, CEO of Every
- “Groom your gurus. When those 10% start to show up, support them. Give them visibility. Ask them to teach. Put them in a newsletter. Promote them. Make sure that other people know what they're doing and how they're doing it. They are your change agents. They are your champions. They are the ones who are going to get the rest of the company to move.” – Edmundo Ortega, Partner of Machine & Partners
💡No one’s figured out ROI yet, but that doesn’t mean you should stall
- “You can’t take productivity to the bank. A 15-20% productivity improvement is not enough confidence for me to pay people 80 cents on the dollar or cut 20% of the workforce. So when you focus on mission critical use cases that drive revenue, not only is it way better for adoption because folks don’t think that their jobs are on the line, but it creates a galvanizing effect around the enterprise program that then allows you to get to efficiency and productivity." — May Habib, CEO of Writer
- "You need to educate yourself on some technical aspects of AI. You need to know how it works, you need to know what's possible. It’s a terrible idea to offload that decision-making onto your technical team. You won't be able to lead if you don't learn. You're going to be operating in the dark and you're going to be making bad decisions." – Edmundo Ortega, Partner of Machine & Partners
- "If you’re thinking about building a business, ask yourself: 'If I could reinvent how my role is done from first principles today, how would I do it?' That’s where new companies will get built—not by automating the same 10 tasks but by rethinking the workflow entirely." – Juliet Bailin, Partner of General Catalyst
- “Developers, myself included, get hung up on technology and lose sight of the business problem we’re solving for. What’s the ROI? What’s the economic value versus the effort it takes to deploy at scale? If the effort is 10 times higher than the actual outcome, then you've got trouble on the viability side. And this is why we see many pilots getting stuck in the pilot phase." – Dr. Philipp Herzig, CTO & Chief AI Officer of SAP
- "There are four things that investors tend to look for in AI applications: exceptional domain expertise, unique access to data or distribution, a complementary understanding of AI, and the ability to create a sticky workflow that personalizes outputs in a way no one else can replicate." – Juliet Bailin, Partner of General Catalyst
- "The two things investors look for: data and distribution. If you have a unique data moat, that's super important with these LLMs. And if you already have a client base, you're in that network, then you can have an edge in distribution." – Marc Bhargava, Managing Director of General Catalyst
- "You cannot expect the ROI to show up at the individual contributor level, you've got to start talking to the leaders and the people in the middle, and start asking them, how are you going to rethink your team compositions? How are you going to rethink how you do these workflows, because they're the ones ultimately in charge of the change." – Bhavesh Dayalji, Chief AI Officer at S&P Global
- “Having enormous amounts of valuable, high-quality proprietary data gives existing businesses a huge leg up today. We’ve seen people first take an AI model, tune it on some relevant public data, and then when they start working with companies, their models improve and improve because they have access to this proprietary data source through their customers." – Juliet Bailin, Partner of General Catalyst
- “The real ROI isn’t always where people expect it. People want to talk about huge transformation or disruption, but sometimes it’s only meaningful to the person it affects. Those things don’t show up on a corporate balance sheet or a government productivity stat—and that’s a problem in how we’re measuring value.” – Marc Zao-Sanders, CEO of filtered.com
💡No one knows exactly what’s coming … the most important thing is to confront the changes head-on
- "If autonomous driving becomes a reality, saving even 10 minutes of productivity per worker per day could translate to a 1% GDP increase. This is just one of the potential applications of AI, and if you can stack 10 similar applications, you're talking about something unprecedented." – Dan Hendrycks, Director of Center for AI Safety
- "One of the challenges is that we don't actually design these systems like typical software. It's challenging to predict AI advancements because we don’t design them explicitly—we set up conditions and let them grow. The result can be unexpected and hard to replicate consistently.” – Dan Hendrycks, Director of Center for AI Safety
- “That's the amazing thing about this tech — It lowers the floor on the tech access and it raises the ceiling on the art of the possible. It's so exciting because it's going to allow the humans to do their best work. And what you have to do is you have to put constraints on that so that you drive the innovation, because constraint drives innovation." — Michael Park, Global Head of AI GTM at ServiceNow
- “There will be good and bad outcomes from AI. The key is to be upfront and honest about both and to actively work toward making the good outcomes more likely." – Juliet Bailin, Partner of General Catalyst