
A ‘people-first’ view of the AI economy

Today marks nine months since ChatGPT was released, and six weeks since we announced our AI Start seed fund. Based on our conversations with scores of inception and early-stage AI founders, and hundreds of leading CXOs (C-suite executives), I can attest that we are definitely in exuberant times.

In the span of less than a year, AI investments have become de rigueur in any portfolio, new private company unicorns are being created every week, and the idea that AI will drive a stock market rebound is taking root. People outside of tech are becoming familiar with new vocabulary.

Large language models. ChatGPT. Deep-learning algorithms. Neural networks. Reasoning engines. Inference. Prompt engineering. Copilots. Leading strategists and thinkers are sharing their views on how AI will transform business, how it will unlock potential, and how it will contribute to human flourishing.

While there are still many unknowns, and it is prudent for us to be aware of the risks as well as the potential of any new technology (“Oppenheimer,” anyone?), one firm conviction makes me optimistic. We are guided by a “people-first” philosophy at Mayfield, one in which the startup founder’s bold vision elevates the customer of their product and ignites a community. When applied to AI, people-first has even more powerful resonance. I believe that two dynamics will combine to establish AI as a powerful force that will allow any human to become what I call Human² — as in, “human squared.”

First, our main form of interacting with computing devices will change. It will become conversational. Whereas we once relied on a command line, then the GUI, the browser, and the mobile device, we are now going to primarily communicate with computers through rich and layered conversations. The impact of that change will be compounded by a second one: For the first time, technology will be able to perform cognitive tasks that augment our own capabilities.

Rather than merely speed up and automate repetitive tasks, AI will generate net new things much like humans do. The result is that we’ll be able to multiply our own capabilities with a human-like copilot — or teammate, or coach, or assistant, or genie. AI × Human = Human². And precisely because the potential and power of AI is so great, the need to focus on responsible development is paramount.

Human → Automate Cognitive Tasks → Accelerate Productivity → Amplify Creativity → Superhuman

We have customized our people-first framework to apply to AI companies and are using it to guide our investment decisions. Today, we are publishing the five key pillars of that framework in the spirit of fostering responsible AI investing:

Mission and values count

Founding values drive culture. They are not something that can be bolted on as a company grows. We saw this in the missions of three of our most successful companies over the last decade. Lyft was dedicated to improving people’s lives with the best transportation; Poshmark put people at the heart of commerce, empowering everyone to thrive; HashiCorp built critical infrastructure that allowed others to innovate.

This time around, we are having similar discussions with AI-first founders to see if they have a human-centric mission and authentic values. We want to understand what drives their thinking about the impact of their technology and ensure we’re aligned.

GenAI has to be in your DNA

The recent explosion in AI has been driven by the innovative thinking of researchers, model builders, ethicists, and technologists. We believe that founders who have been steeped in that world understand how to design and build people-first AI businesses.

So when we meet with founders, we are looking for:

  • A fundamental belief that AI will augment humans, not replace them — AI is a teammate or even a co-founder.
  • A founding team that has worked in the academic or applied generative AI field, or one that has a unique insertion point into the generative AI wave.
  • A passion for design and user experience that surfaces otherwise invisible AI capabilities across human-computer interactions and workflows.
  • Solutions that are powered by generative AI elements like LLMs, proprietary models and datasets, and a chatlike natural language interface.
  • An overall value proposition that involves the cognitive offloading of repetitive tasks.

Trust and safety cannot be an afterthought

As we already know, AI can have harmful effects. Those we have identified include hallucinations, data poisoning, lack of transparency, inequity, injustice, bias, deepfakes, IP and copyright infringement, and violations of privacy and security.

We are asking founders to evaluate the trustworthiness of the models driving their innovation, and encouraging them to look at pioneering work on holistic model evaluation, such as that being done at Stanford. We believe founders need to evaluate this not only at the time of model selection but across the whole lifecycle of a model, from development through testing and deployment. At the same time, compliance with the growing body of regulations, guidelines, and frameworks for the responsible use of AI is paramount.

Data privacy is a human right

We believe that privacy requires its own focus and cannot simply be subsumed under trust and safety. Fortunately, given the myriad regulations that have emerged in recent years, such as CCPA, DGA, DMA, DPA, GDPR, PIPA, and PDPO, companies are already working on putting data controls in place.

This is especially important in the age of generative AI, when models produce new data from training sets, and the unauthorized use of training data has become a significant intellectual property concern. Regulations for the ethical use of data, which provide assurance and risk management, are now emerging across the globe.

Governance areas that have to be addressed include:

  • Discovery and inventory of all data.
  • Detection and classification of sensitive data.
  • Understanding of model access and entitlements by users.
  • Understanding of consent, legal basis, retention, and residency.
  • Data quality and lineage.

Paying attention to these things is critical. We are asking founders to do so and encouraging them to build guardrails now. It will be too hard to act once the proverbial data horse has left the barn.

Superhuman impact can be scored

We believe that people-first AI will truly elevate humans, and we are working on a design framework to measure that potential when meeting founders.

Going back to our company examples, Lyft, Poshmark, and HashiCorp elevated drivers, seller stylists, and cloud practitioners, respectively, enabling them to grow into vibrant communities. They had to make tough decisions to stick with their commitment but ultimately were rewarded by the satisfaction of having achieved their missions of empowering and elevating people.

As an inception and early-stage investor, we focus on championing entrepreneurs and helping them build iconic companies. We believe that bringing a people-first approach to fostering generation-defining AI companies will result in enduring companies and a richer, better world.
