When we talk about the future, it’s usually about some distant fantasy world rather than the place we live in. But sometimes a tiny fraction of this future vision gets through and takes its place in the present. We have all heard about the many attempts to digitize our world and create autonomous analytic engines that can manage themselves, but this still lies in the realm of fiction. Or does it? Today, we have the honor of meeting Toby Simpson, CTO and co-founder of the “decentralized digital world” called Fetch AI. With the power of blockchain technology, the Fetch team intends to create a global infrastructure for data gathering, recognition, and analysis that could drastically change the world we live in. How so? Let’s find out together. Our conversation lasted for quite a while, so we have split this interview into two parts, with the first being more about business and the second about the technical aspects of the project.
Hello, Mr. Simpson. We’re excited to have this opportunity to discuss your project. First, could you tell us a bit about yourself? By the way, we’re big video game fans as well. Do you have some personal favorites? Did building the video game “Creatures” in the 90s help you gain experience in terms of project development and community relationships?
Hello! It’s great to be here with you guys. I’ve been building computer games since I first got my hands on a computer in the 80s, but I only got involved commercially in the early 90s on the Amiga. My first game, Global Effect, was a great example of over-complexity in software. I developed it from the top down and it ended up being nearly a hundred thousand lines of 68000 assembly language. I pretty much fell into the “luck and good fortune” department after around 40,000 lines. It was, to say the very least, tough to debug, and we were all relieved to squash enough of the bugs to release the game. As I journeyed through the 90s, I learned a more bottom-up, agent-based approach to software design that allowed complex emergent behavior to arise from large populations of simple things. It was incredibly exciting to see such complexity just happen without explicit rules for it. As my games became richer in features and gameplay, they actually got smaller and smaller in code.
This approach was shown best in Creatures, which I produced and directed from 1996 until Creatures 3 a few years later. In Creatures, a large population of biologically inspired small, simple things (such as reactions, chemical emitters, receptors, and neuron dynamics) combined to produce a digital organism that could learn how to live in its environment by itself. That was very, very cool, and also the start of watching the Internet enable the growth of communities around a project. With Creatures, we had it all: poetry, stories, artwork, fan sites, genetic editors, new objects, new creatures, and so much more. It was amazing and a huge privilege to be involved with such a project.
To answer the question about my favorites: I have many, so I’ll limit my list to three. I loved the original Elite in the 80s; so much from so little, an incredible achievement. Then there’s Bubble Bobble. Let’s face it, the world would be a poorer place without Bubble Bobble. But from a pure value for money and raw enjoyment perspective, the original Command and Conquer, played in multiplayer using a local network or over modems (yeah, I’m that old), was an absolute winner. Many happy evenings trying to figure out if there was a guaranteed winning strategy (my friends and I never found it)!
Could we ask you to introduce the Fetch team to us? Even from a brief overview, it seems that your team is heavily oriented towards machine learning, while only a few people deal with marketing. Is this a deliberate policy of yours? Could you also unravel the mystery regarding the start of the project? Why did you decide to call it “Fetch”? This word seems to have a lot of meanings, and we’d love to get some clarity on the matter.
The founders, Thomas, Humayun, and I, have been fortunate enough to assemble a fantastic (and rapidly growing) team. With incredible machine-learning and blockchain guys like Troels and Jon, great commercial direction from Arthur, community management by Josh, and our cryptically titled “Chief Amazement Officer”, Catherine, we could not be better positioned to deliver Fetch. You’re right in noting that we have a lot of machine-learning and software engineers, but that’s key to getting the technology right. We’re hiring more people in community management, developer support, and commercial partnership management as the project matures and heads towards a public release.
As for the mystery of the start of the project: a touch of mystery is a good thing! But this question is not so mysterious after all: the other co-founders and I have been talking on and off for many years (over a decade now for me and Humayun) trying to figure out how to create a new kind of economy: one that is fairer, enables complexity to emerge from simplicity, and one where everything is in play and able to make its own decisions, whether through digital or human representatives. This would be a unique combination of AI, ML, virtual worlds, agent-based systems, and more, but it took us a while to figure out how to make it scale. We’d need tens or hundreds of billions of digital entities (our Autonomous Economic Agents), and conventional client/server technology, no matter how groovy it was, just can’t deliver that sort of scalability. Decentralized ledger technology put us on the road to finally coming up with a solution. Our amazing team worked through 2017 and this year to make Fetch a reality.
As for why it’s called Fetch? Well, there are many reasons. The main one is the idea of agents working tirelessly to gather (or fetch) the pieces of a puzzle in order to offer solutions to life’s problems. Plus, Fetch does stand for something: Framework for Economic Transaction and Communication Hierarchies.
It seems that there are some connections with DeepMind, and many of your team members mention it as well. Could you tell us a little more about DeepMind? What is their role in your project? Are they official partners of Fetch?
Our CEO, Humayun, was one of the original investors in DeepMind, and I was there from the very beginning to examine how nature can influence the journey towards machine intelligence. It was a wonderful experience to be working with so many of the greatest minds in AI. Their achievements in the field speak for themselves: they continue to build so many incredible technologies. There is no ongoing relationship, though. Fetch and DeepMind are two different organizations with different objectives. I follow what they are doing out of keen interest — they are at the very forefront of AI and machine learning, and I believe that what they are doing will benefit us all.
Your concept of the Fetch ecosystem seems very grand. So grand, in fact, that it’s hard to imagine the overall mechanics of the system as opposed to its separate elements, and a lot of people seem to think so as well. Could you explain how Fetch works?
We have moments when the grandness of it does make us kick ourselves to make sure we’re not dreaming, but yes, to be able to create a world of such magnitude, populate it with so many things, and enable all that by converging the latest ideas in AI, machine-learning, and decentralized ledgers is an incredible opportunity. I guess we have to remind ourselves that even though it’s huge, it is made of many smaller things: the nodes, the agents, the individual components of each, and so forth, and that these allow for the complexity, the grey areas between black and white, to emerge without having to be specified. It is this that lets us represent data, services, infrastructure, IoT devices, people, and more in a world that constantly adapts itself in real time to suit each observer. I refer to it as the world in which the things in the economy of things live: the ultimate dating agency for value providers. No matter what you need or what you have, Fetch will tailor a view into its world just for you.
Now, all of us know about the recent Facebook scandal revolving around the selling of private information. From this perspective, it looks like people are not that eager to willingly share most of their information with anyone. On the other hand, you promote the idea of monetizing this type of data. Is this what regular people look for? If not, then how could you preserve the integrity of the data? And if the answer is “yes,” then is there a mechanism to check whether the received data is biased? Also, as individual pieces of data grow in importance, won’t agents’ contributions diminish? If so, would there be other means of rewarding users besides the information they provide?
I must confess, I do wish people cared more about their privacy and security, and when these things happen, it is a reminder of what we can lose. The decentralized space generally does help: it puts more control over your stuff with you. And technologies such as verifiable claims, multi-party computation, homomorphic encryption, and more allow for the data you value to remain secure whilst still having useful work done with just what is required of it. With your information, we’re not monetizing anything that the user does not wish to monetize: we’re bringing your sensors to life and allowing them to monetize themselves. Do you care that the weather information you have is sold as you walk down the street? Or that your current activity allows someone else to plan their day? If it is not personally identifiable, then this is a wonderful way of getting value from things whose value would otherwise simply go to waste. We’re surrounded by underutilized assets, and for them to be exploited by you rather than sucked off your device by centralized entities feels like a much fairer way of conducting business, particularly when you are in complete control over what happens.
Fetch is ultimately about solving complex problems with large numbers of moving parts by bringing these parts to life, letting them deliver their value, and structuring a world to make that possible. This is stuff that’s terribly hard to solve from the top down by any form of centralized control because it changes so fast and every second counts: each moment spent deciding on a course of action through a hierarchical, centralized control structure reduces response times and flexibility and strips out opportunities to get things done.
While you’re using a revolutionary approach to gathering and analyzing data, it’s still too early to write off conventional companies. Some cartography-based applications allow the community to share information about road incidents along with their GPS data, so there is a lot of data that these companies get to sell or analyze. How would a decentralized approach rival the established players?
Agreed, it is revolutionary, but change is rarely instantaneous, nor should it be. Traditional examples of such applications provide more up-to-date information, and that reduces frustration and gives us, as the users of these systems, options. Fetch turns this inside out: each of the individual parts is its own entity, in control of its own destiny and able to deploy its value as and when it wishes to. It happens to exist in a space that almost magically adapts itself to help that deployment, and collectively, this allows that information to be pushed to you rather than pulled by you. Instead of relying on one centralized entity or a few different apps or websites, you can have all the information in one place, one decentralized world, owned by its parts and delivered to you – on demand – when it is needed. This changes the role of intermediaries, as they no longer act as the sole connectors of those that want with those that have, since the connections are delivered by the network to any and all in a decentralized world. We believe that this democratizes learning and intelligence, as well as the data it is drawn from – providing access to trust, market information, and solutions to problems that would have been challenging and expensive before.
While it’s trendy to discuss all things spiritual nowadays, it’s still good old consumerism that rules the game. And what better way is there to analyze people’s behaviors than by analyzing their purchases? Usually, this data goes right to the hands of the banks and payment services, but do you also have a way to gain access to it? Is it legally and technically possible, or is this not worth the effort?
This is an interesting question, especially in today’s curious mix of GDPR (General Data Protection Regulation) and the monetization of users’ activities and data. It’s an old and tired phrase, but it’s accurate: if you’re not paying for the product, then you are the product. Fetch is not a database. It does not store personal information or the specifics of what you did and when. It stores aggregate data about such activity and combines it with everyone else’s in order to better match value providers with those who want, or may want, that value.
We’re trying to bring the world’s bits and pieces to life, providing them with a world that allows them to interact effectively with our world. They can then work together to collectively solve problems, and the Fetch network ensures that those who should be together are together. This is a different type of connection; it doesn’t analyze an individual’s behavior or personal preferences. Instead, it looks at which choices should probably be presented given a certain set of circumstances (I want this, at this time, in this place). The user can release certain information as a one-shot release (“I am over 18,” “I hate coffee,” or “I prefer to fly British Airways” – but not their credit card information) to their representative agent to refine results, but they are in control of this, and representative agents can be duplicated, cloned, or created on demand to obfuscate or prevent the ability of other network agents to attribute actions to an individual.
We believe that users have a right to privacy. By turning all of the components into value creators, there is no longer the need to monetize an individual: the world isn’t free anymore, it is just very low cost and friction free. If there were a trivially easy way of popping 1 cent in a website’s jar without having to register, set up an account, give away email addresses, or other personal information, would you do it? Quite possibly. I know I would for a bunch of the websites I visit regularly, particularly if that meant I no longer needed to see irritating, resource-sucking, privacy-invading advertisements all the time. Fetch, along with other decentralized technologies, introduces such possibilities, and that is an exciting new model where content creators are rewarded in a different way.
It’s true that everyone has to start somewhere, and in your case, starting out seems particularly difficult. You need new users, but you can only attract them by already having users on the platform, so this turns into a “chicken or the egg” situation where new users don’t come because there aren’t enough users already. Do you have some means of overcoming this phenomenon? Do you have some special projects prepared for the industries that you want to start with?
You’re right to raise the chicken and egg point. We refer to it as the “grand bootstrapping issue”: if you create a network that can do everything, how does anyone know where to start? And how do you reach critical mass so that useful work can be achieved? These are big questions and ones that we have invested a lot of time into working on. For these reasons, we’re focusing on some key initial application areas for Fetch: transport and mobility, energy, supply-chain optimization, hospitality, and healthcare. These are all areas with problems featuring large numbers of moving parts that could be improved if those parts were brought to life and given the autonomy to make decisions without input from centralized entities. With a powerful prediction economy refining who talks to whom and when, these problems can be solved in a more personal, tailored way. To help in these areas, we’re working with key commercial and academic partners to get things rolling, and on top of the partnerships that we’ve already announced, we have a whole host of others coming in the next few months.
We’re also ramping up our meet-ups and our developer and community engagement efforts. We’ve recently had meetups in London, Berlin, Amsterdam, Toronto and Chicago and have many more planned. Fetch is an incredibly exciting project and we can’t even begin to imagine what people will build with it. Supporting those that are going to start pushing the boundaries is very important. We’re also making it super easy to get involved: that dumb sensor you have? Why not turn it into a Fetch AEA and make something from it. All those sensors on your mobile device? Why not turn them into Fetch AEAs and make something from those, too?
Sorry for all the skepticism; it’s just really interesting to hear your opinion on all the hardships you may face. In fact, we’re fascinated by this grand undertaking of yours. But instead of more of our praise, let’s get into the points that piqued our interest. Could you tell us more about the Fetch network and its ledger? Is it open source? It seems to be innovative, and lots of projects may want to integrate with it. Do you have any plans to make it a framework for other projects to build on and create a network out of them?
Fetch is indeed open source. Anyone is free to download the code, compile it themselves (or not!), and run a node. Likewise, anyone is free to get started developing autonomous economic agents (AEAs), useful proof-of-work programs, and much more. There are many ways in which other projects can integrate with or connect to Fetch: via data sources to useful proof-of-work programs, through AEAs, and more. We encourage and support all such efforts.
The AEAs (autonomous economic agents) have been mentioned a lot, but how exactly do they interact with the platform? Is there some specially designated software for them? What devices is it compatible with?
Fetch can be thought of as the world in which the “things” in the “economy of things” live. It’s an economic Internet, a world where digital entities can go, explore, and exist. If you have a thing, be it a piece of hardware, some data, a service, a person, or some infrastructure, then you can turn it into an AEA and release it into the Fetch world for it to find things it wants or sell whatever value it has. The Fetch world will take care of connecting agents together to reduce the friction of getting useful work done.
Making an AEA is deliberately easy. We will provide documentation and several examples in a variety of computer languages, including C++. It’s also possible to create drag-and-drop AEAs using our mobile app, and we expect others will create systems like this, too. AEA code is light: while you can have highly advanced AEAs packing a great deal of AI, you don’t have to. Simple data-representative agents can be short, sweet, and resource light: this makes turning something into an AEA, or creating one from scratch, very easy and possible on a very broad range of computing devices.
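To make the "simple data-representative agent" idea concrete, here is a minimal sketch in plain Python. Everything in it is hypothetical: the class and method names (`SimpleDataAgent`, `advertise`, `sell_latest`) are illustrative only and are not Fetch's actual AEA API; the point is simply that an agent wrapping a single sensor can be a few dozen lines that record readings, advertise their availability, and hand them over for a price.

```python
# Hypothetical sketch of a tiny data-selling agent. The names used here are
# illustrative assumptions, not Fetch's real AEA framework.

class SimpleDataAgent:
    """A toy agent representing one sensor that sells its readings."""

    def __init__(self, name, price_per_reading):
        self.name = name
        self.price = price_per_reading
        self.readings = []

    def record(self, value):
        # Store the latest sensor reading.
        self.readings.append(value)

    def advertise(self):
        # Describe what this agent offers to the wider network.
        return {
            "agent": self.name,
            "offers": "sensor-readings",
            "price": self.price,
            "available": len(self.readings),
        }

    def sell_latest(self, payment):
        # Hand over the newest reading if the buyer meets the asking price.
        if payment >= self.price and self.readings:
            return self.readings[-1]
        return None


agent = SimpleDataAgent("weather-sensor-42", price_per_reading=0.01)
agent.record(21.5)
print(agent.advertise()["available"])  # 1
print(agent.sell_latest(0.01))         # 21.5
```

A real agent would, of course, also need network registration and payment handling, but the core loop of "record, advertise, sell" is small enough to run on very modest hardware, which is the point being made above.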
We think that useful proof-of-work could become your strongest selling point. The concept of putting all of this electricity and hardware to useful work is simple, yet still untouched. But when distributing such tasks between multiple computation sources, there should be some algorithm that can manage all of this in real time. Do you have one? Is it an autonomous system, or does it need specific personnel to look after it? What about the computational power output? Are there enough tasks for it to handle?
That’s a lot of questions in one! First, yes. The concept itself is a strong selling point. The idea that the computing power of the overall network can be harnessed to deliver useful work to the network and its users is a transformational concept, especially with Fetch, where we really do need that power to be used effectively. Without it, we couldn’t optimize network performance or deliver trust information or the predictions and other knowledge from data that allows us to create a real-time digital world that adjusts itself to be perfect for each inhabitant.
The real bonus point is that anyone can create algorithms on the Fetch ledger and have them used by other people in order to reach a consensus on the network. This is a cool idea – if you wrote a “dinosaur recognizer” that could take a PNG or JPG image and return the likelihood that it contained a dinosaur, then you could submit that to the Fetch ledger. The smart contract and the interface functions would be on the ledger, and the data and algorithms would be on the DAG, where we can afford to trim them at a later date (or replace them with better ones). You, as the author, would earn value each time it was used, as would the nodes that executed the code as part of the consensus. Then, if someone came up with an alternative that was 10% better at recognizing dinosaurs, there is a mechanism for allowing that one to supersede it.
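The submit-earn-supersede mechanism described above can be sketched in a few lines. This is only an illustration of the economic logic, not Fetch's ledger code: the `AlgorithmRegistry` class, its scoring rule, and the flat per-use fee are all assumptions made for the example.

```python
# Illustrative sketch of the supersession idea: keep the best-scoring
# algorithm for each task and credit its author on every use. Names and
# the fee model are hypothetical, not Fetch's actual ledger API.

class AlgorithmRegistry:
    def __init__(self):
        self.entries = {}   # task name -> (author, score, function)
        self.earnings = {}  # author -> accumulated fees

    def submit(self, task, author, score, fn):
        # A new algorithm replaces the current one only if it scores better.
        current = self.entries.get(task)
        if current is None or score > current[1]:
            self.entries[task] = (author, score, fn)
            return True
        return False

    def run(self, task, data, fee=1):
        # Execute the best algorithm for the task and credit its author.
        author, _, fn = self.entries[task]
        self.earnings[author] = self.earnings.get(author, 0) + fee
        return fn(data)


registry = AlgorithmRegistry()
registry.submit("dinosaur", "alice", score=0.80, fn=lambda img: 0.80)
registry.submit("dinosaur", "bob", score=0.88, fn=lambda img: 0.88)    # supersedes alice
registry.submit("dinosaur", "carol", score=0.70, fn=lambda img: 0.70)  # rejected
print(registry.run("dinosaur", b"image-bytes"))  # 0.88
print(registry.earnings["bob"])                  # 1
```

In the scheme Simpson describes, the comparison and payout would happen on-chain via smart contracts, with the algorithms and data stored on the DAG, but the incentive structure is the same: better submissions displace worse ones, and usage fees flow to whoever currently holds the top spot.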
But that’s just the start of it – you can submit problems without solutions, allowing other users of the network to then submit answers that perform the jobs you require. Given that our custom VM allows for various data connections (on other protocols, data sources, etc.), it is also possible to connect many of these together to form a “swarm” intelligence at a useful proof-of-work level.
Of course, you also have to remember that on top of this vast decentralized programmable AI/ML machine, you have the individual agents, of which there may be hundreds of billions. Together, these exist in a space that is trying to connect them in real time in the most effective way possible. Imagine the combined intelligence they could develop given the right circumstances. I recall Creatures, where we had a thousand or so neurons and a few hundred reactions and other biological components. Here we are, at the start of Fetch, imagining being able to support so many billions. It’s almost unimaginable. I can’t wait!
It’s also smart of you to use percentage-based trust information in your network. Turning transaction lag into nothing might solve one of the biggest blockchain issues to date. What models are being used for this estimation? How much data is needed for it to learn? Could there be any scaling errors or other issues?
Trust information is indeed vital. Being able to make a snap decision on how likely a bunch of transactions are to make it to the global state before they do speeds things up enormously. When agents are working with potentially hundreds of other agents to solve complex problems, the difference is between hours and seconds. All of this adds up to creating a new kind of economy where billions of digital entities can work tirelessly to transform, process, receive and deliver information that allows complex problems to be solved almost magically for those that have them.