I had the honor of presenting at the 4th FIL Dev Summit, organized by Protocol Labs in Brussels on July 9-11.
In my talk, “Supercharging SQL: Global Data Supply Chains for Verifiable AI and Analytics”, I made the following key points:
- With 80% of AI researchers’ time spent on data acquisition and preparation, AI is fundamentally a data problem
- Advancing AI means moving the world towards the global data economy
- The existing data lakehouse model is unfit for global data exchange
- Batch processing is the culprit that makes data pipelines manual and fragile
- Stream (temporal) processing is the solution that makes data processing autonomous and composable
- By layering Web3 properties on top, we can move the world towards a data economy based on collectively-owned data supply chains
You can find the full recording here:
It was really exciting to share the details of several projects we have been working on and to show the trajectory we’re taking:
- Kamu is fast becoming the “Kubernetes for Data” with 4 powerful enterprise data processing engines already integrated into one system
- Kamu is the first to combine a blockchain indexer, an off-chain data lakehouse, and an oracle under one technology, blurring the line between on- and off-chain data
- Connecting AI to a web of community-operated factual data supply chains simultaneously solves several major problems surrounding LLMs today, such as veracity, attribution, and compensation
- Our verifiable data processing model is already being used in the generative-AI space to fairly distribute rewards to IP owners
Big thanks to Protocol Labs for organizing the event and to everyone who attended!
It was especially great to meet some of our Discord members in person.
Till next time!