Monolithic AI models like OpenAI's won't scale: challenges and solutions

We've been hearing quite a bit about AI lately, and for good reason. Generative AI, and LLMs like OpenAI's GPT models in particular, have created a sense of excitement because they go beyond the feeling of interacting with a simple chatbot. Instead, they make users feel understood and offer practical applications for all sorts of daily tasks. This shift in AI capabilities has captured people's attention and sparked a new level of engagement with the technology.

However, what many people fail to grasp is that generative models have limitations when it comes to scalability, for one technical and two ethical reasons:

No single company can handle storing and processing all the world's knowledge.

Yes, Google Search dominated the business of "organizing the world's information," but generative AI models go beyond that process. These models consume information and turn it into knowledge - essentially a learning process that emulates how humans acquire knowledge from information. That newly acquired knowledge is then processed further to generate even more complex and insightful knowledge and intelligence, much as humans build on existing knowledge to generate new ideas, hypotheses, and discoveries.

As the user base grows - potentially into the millions, or even approaching the scale of a tech giant like Google - the storage required to handle all the prompts and responses increases drastically. Beyond storage, significant GPU resources are needed for every question asked and every update to the underlying data. This implies a huge infrastructure demand that includes not only raw computational power but also the energy costs associated with it.

So, how should we design a system that can scale to support millions of users without suffering from slow performance or overwhelming costs?

Bias

One AI model is like one global brain. Will it turn out sassy? Republican? Artsy? If we're stuck with one brain, we'll probably end up with one personality that half the world dislikes because it contradicts their ideas, while the other half loves it because it echoes their thoughts. This happens because the model sifts through information with a specific algorithm, just like our brains do, eventually settling into an identity that we might not find likable or accurate.

How do we create a fair and objective generative model that works for us in the pursuit of truth?

Privacy will go to the trash bin (not that it hasn't already)

Centralizing data ownership in the hands of a single entity means relinquishing all our thoughts, research, essays, and knowledge to that entity in the name of AI. This could create a trust deficit among users; researchers, for instance, might be reluctant to share their data if it becomes universally accessible. On the other hand, even with plenty of companies like OpenAI out there, if we don't share our data, we won't contribute to "global knowledge."

So, how do we safeguard data privacy during the training process, ensuring users can freely share without fearing misuse or theft of their work?


New framework

A new framework needs to be built, ensuring its mechanisms are designed to scale with the population and the vast amount of knowledge we generate as a species. This framework would aim to extract unbiased knowledge from the perspectives of billions of individuals without compromising their privacy.


We need to find a balance between collective intelligence and the privacy rights of individuals, where unbiased knowledge can be derived from diverse sources while the privacy of everyone involved is safeguarded.

Here is how we can achieve this:

Imagine AI not as one person, but as a whole community. No single company should own all the data and models. We need lots of AI companies, each doing a piece of the work and coming together when needed. It's like many AI brains working as a team.

This begins with a more dispersed, federated learning approach for both the datasets and the training models. Each company or node, armed with a part of the data, would process its piece, like understanding one chapter of a book. It then shares the essence of what it has learned with the other nodes. The result is a tapestry of knowledge drawn from diverse corners, without overwhelming any single entity's systems or compromising the security and privacy of the data, as the sketch below illustrates.
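To make this concrete, here is a minimal sketch of federated averaging, the core aggregation idea behind federated learning, in plain Python with NumPy. The node names, the toy linear "model," and the local training logic are illustrative assumptions rather than a production protocol; the point is that each node trains on its own shard and shares only weights, never raw data.

```python
import numpy as np

# Hypothetical setup: three nodes, each holding a private data shard.
# The "model" is just a weight vector for a linear predictor.
rng = np.random.default_rng(0)
shards = {name: (rng.normal(size=(100, 5)), rng.normal(size=100))
          for name in ["node_a", "node_b", "node_c"]}

def local_update(weights, X, y, lr=0.01, steps=20):
    """One node refines the shared weights on its private shard (never shared)."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Federated averaging: only weights travel between nodes, not data.
global_w = np.zeros(5)
for _round in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in shards.values()]
    global_w = np.mean(local_ws, axis=0)   # aggregate the "essence" learned

print("shared model weights:", global_w.round(3))
```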

Figure: A modular approach to scaling AI, with AI models, data providers, and blockchain networks working together.

Technically, to get this rolling, we need a decentralized data storage layer, much like IPFS. It ensures the data is spread out instead of being clustered within a single provider like Azure. But we also need more AI models that are easy to use - OpenAI's GPT models, Bard, Cohere, and so on - running in unison. Each model would bring a unique skill to the table. For example, Bard is good at generating text, while GPT-4 is good at understanding natural language. By working together, they can produce more comprehensive and informative responses.
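As a rough illustration, the sketch below stores a document on a local IPFS node via its standard HTTP API and then fans a question out to several model endpoints. The endpoint URLs and the request payload format are placeholders invented for this sketch; a real deployment would use each provider's actual API and authentication.

```python
import json
import requests

# Assumed: a local IPFS daemon exposing the standard HTTP API on port 5001.
IPFS_API = "http://127.0.0.1:5001/api/v0"

def store_on_ipfs(text: str) -> str:
    """Add a document to IPFS and return its content identifier (CID)."""
    resp = requests.post(f"{IPFS_API}/add", files={"file": text.encode()})
    return resp.json()["Hash"]

# Hypothetical model endpoints - placeholders, not real provider URLs.
MODEL_ENDPOINTS = {
    "gpt": "https://example.com/gpt/ask",
    "bard": "https://example.com/bard/ask",
    "cohere": "https://example.com/cohere/ask",
}

def ask_all(question: str) -> dict:
    """Fan one question out to every model and collect the answers."""
    answers = {}
    for name, url in MODEL_ENDPOINTS.items():
        resp = requests.post(url, json={"question": question}, timeout=30)
        answers[name] = resp.json().get("answer")
    return answers

cid = store_on_ipfs("A research note to be shared across nodes.")
print("stored at CID:", cid)
print(json.dumps(ask_all("What does the note claim?"), indent=2))
```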

Models that work with voice and video can be combined with LLMs in a similar way. For example, a model that recognizes objects in video could be used to improve the accuracy of an LLM's answers to questions about those objects.
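A hedged sketch of that pipeline: a detector annotates video frames, and its detections are folded into the prompt so the LLM answers about what is actually on screen. Both `detect_objects` and `llm_complete` are hypothetical stand-ins for whatever vision model and LLM a node happens to run.

```python
# Sketch of a vision-grounded LLM pipeline. detect_objects() and
# llm_complete() are hypothetical stand-ins for real model calls.

def detect_objects(frame_id: int) -> list[str]:
    """Pretend detector: return object labels found in one video frame."""
    fake_detections = {0: ["dog", "ball"], 1: ["dog", "couch"]}
    return fake_detections.get(frame_id, [])

def llm_complete(prompt: str) -> str:
    """Stand-in for an LLM call (a local model or a provider API)."""
    return f"[model answer grounded in prompt: {prompt!r}]"

def answer_about_video(question: str, frame_ids: list[int]) -> str:
    # Fold the detector's output into the prompt as grounding context.
    seen = sorted({obj for f in frame_ids for obj in detect_objects(f)})
    prompt = (f"Objects detected in the video: {', '.join(seen)}.\n"
              f"Question: {question}")
    return llm_complete(prompt)

print(answer_about_video("What is the dog playing with?", [0, 1]))
```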

Additionally, incorporating zero-knowledge proofs provides a way to validate and trust shared results or research without exposing the underlying data. This could encourage more companies and individuals to contribute to global knowledge growth while protecting their proprietary information.
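To give a feel for the mechanism, here is a toy Schnorr-style proof of knowledge in pure Python: the prover convinces a verifier that it knows a secret x with y = g^x mod p, without ever revealing x. Real systems would use vetted cryptographic libraries and far larger parameters; the setup here is purely illustrative.

```python
import secrets

# Toy parameters - far too small for real security, fine for illustration.
p = 2**127 - 1   # a Mersenne prime used as the modulus
g = 3            # generator (assumed suitable for this sketch)
q = p - 1        # exponents are reduced modulo the group order

# Prover's secret and the public value derived from it.
x = secrets.randbelow(q)   # the private witness (e.g., proprietary result)
y = pow(g, x, p)           # published commitment to the secret

# --- One round of the Schnorr identification protocol ---
r = secrets.randbelow(q)   # prover: random nonce
t = pow(g, r, p)           # prover -> verifier: commitment
c = secrets.randbelow(q)   # verifier -> prover: random challenge
s = (r + c * x) % q        # prover -> verifier: response (x stays hidden)

# Verifier checks g^s == t * y^c (mod p) without ever seeing x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted: prover knows x, verifier never saw it")
```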

Of course, the data used to produce these results should be immutable; otherwise the results will be invalid and untrustworthy. Blockchain technology can help with this.
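A minimal sketch of the tamper evidence a blockchain provides, using nothing beyond Python's standard library: each record carries the hash of the one before it, so altering any past entry breaks every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, record: str) -> None:
    """Link a new record to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"record": record, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append(chain, "dataset CID registered by node_a")
append(chain, "model weights v2 hash registered by node_b")
print(verify(chain))             # True - history is intact

chain[0]["record"] = "tampered"  # try to rewrite history...
print(verify(chain))             # False - tampering is detected
```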

Both the AI models and the storage nodes would be compensated for their services in a number of ways. For example, they could be paid for the data they provide, or for the services they perform, such as generating text or answering questions. In such an automated stack, payments should happen automatically, and cryptocurrencies are a very good way to achieve this.
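As a sketch of what automatic settlement could look like, the snippet below meters each node's contributions and computes what it is owed. The rates, token denomination, and node names are invented for illustration; in practice the settlement step would trigger a wallet transfer or smart-contract call rather than just print balances.

```python
from collections import Counter

# Hypothetical per-service rates, denominated in some token.
RATES = {"storage_gb_day": 0.02, "answer_generated": 0.001}

usage: Counter = Counter()

def meter(node: str, service: str, units: float) -> None:
    """Record billable work performed by a node."""
    usage[(node, service)] += units

# Simulated activity across the network.
meter("storage_node_1", "storage_gb_day", 500)
meter("model_node_gpt", "answer_generated", 12_000)
meter("model_node_bard", "answer_generated", 8_000)

def settle() -> dict:
    """Compute each node's payout; a real system would pay on-chain here."""
    owed: dict = {}
    for (node, service), units in usage.items():
        owed[node] = owed.get(node, 0.0) + units * RATES[service]
    return owed

for node, amount in settle().items():
    print(f"{node}: {amount:.2f} tokens")
```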

Conclusion

As we have seen, addressing the challenges of scaling generative models while preserving privacy requires a new paradigm. This entails establishing a distributed network where companies, organizations, researchers, and individuals can contribute to a global knowledge database without compromising privacy. Building upon this foundation, companies can develop specialized generative models based on their expertise. These models would communicate with each other and with the global database, creating a collaborative and comprehensive knowledge ecosystem.

To ensure quality and reliability, new entities would emerge to validate and verify the performance of these models. Innovative interfaces would also be developed, allowing for seamless integration and customization based on specific needs. By leveraging cryptocurrencies, automatic payment mechanisms can be implemented, facilitating efficient transactions within this decentralized ecosystem.