Welcome to the latest installment of our podcast series, where we dive deep into the significance and implications of verifiability in technology and cryptocurrency. This episode brings together industry legends to dissect what verifiability means in today’s tech landscape, why it’s necessary, and how it’s evolving with the advent of cryptographic advancements and blockchain technology.

Speakers on This Episode

Listen now or subscribe to catch the full episode. Click below or head to our YouTube channel.

Keep reading for a full preview of this compelling discussion.

Noteworthy Quotes from the Speakers

Himanshu: “Verifiability in AI introduces a completely new realm of computation where accuracy isn’t as paramount as being able to verify what computation has done.”

Stone: “What excited me about crypto a few years ago was the ability to trustlessly verify a lot of these activities on-chain, cutting out the middleman and driving power back to the end-user and consumer.”

Prashant: “The biggest beauty of web3 systems and decentralized systems is that you get free testers out of the box. Those testers are the people who are trying to exploit your system.”

Quote of the Episode

“Verifiability in crypto is more crucial because you are swimming in the open ocean, and you don’t know when you might need to verify every computation to ensure safety.” — Himanshu

Speakers and the Focus of Their Discussions

Himanshu delves into the necessity of verifiability for computational accuracy and security in the development of AGI. He explains the inherent need for a system to verify its actions within a trustless environment, particularly in AI and blockchain operations.

Stone discusses MIRA’s innovative approaches to enhancing verifiability in the blockchain space and emphasizes how trustless verification can empower end-users and consumers, providing greater control over their digital interactions and transactions.

Prashant shares insights from the frontline of developing Spheron, exploring how verifiability is crucial for maintaining the integrity of systems against potential exploits and ensuring the robustness of decentralized platforms.

TL;DR

The podcast focused on the importance of verifiability in AI and blockchain, discussing its necessity for integrity in decentralized environments. Key takeaways include:

Verifiability ensures the accuracy and trustworthiness of computations, which is crucial in open and decentralized systems.

It helps address biases and prevent hallucinations in AI by maintaining error rates below critical thresholds.

Economic and computational costs are associated with verifiability, but these can be managed through innovative approaches like ensemble evaluations and decentralized computing.

Future innovations may leverage verifiability to enhance user trust and system reliability, driving forward the integration of AI and blockchain technologies.

Transcript of the Podcast: “Big Facts or Big Hype? The Truth of Verifiability”

Panel Introduction

[00:01] Prakarsh: Hello, hello, hello! Welcome to our latest stream, which promises to be an extraordinary one for us. Today, we’re diving into a hot topic that’s capturing attention everywhere: verifiability. Dubbed the “new cool kid on the block,” verifiability has become essential in our digital conversations, prompting a wave of discussions and projects that aim to tackle its complexities. To help us navigate this crucial subject, some very special guests join us. Let’s explore why verifiability is not just necessary but vital and what makes it the focal point of so many innovative endeavors.

[00:33] Prakarsh: With me today are some of the original pioneers in this space who have been developing groundbreaking technologies for quite some time. Joining us is Himanshu, the co-founder of Sentient, where he leads the development of artificial general intelligence (AGI). We also have Stone, the head of Business Development at MIRA, overseeing the strategic growth of their projects. Lastly, from our team, we have Prashant, who is spearheading the development of Spheron. This field has captivated my interest immensely, especially as I began reading and exploring its depth. Today, we delve into the first topics that sparked my curiosity…

The Need for Verifiability in Crypto

[01:10] Prakarsh: The question that came to my head is, what exactly is verifiability? If I throw the ball randomly to anybody here, I would want to understand, for our viewers and for everybody, why we need verifiability and what exactly it is. Himanshu, I would love to start with you.

[01:53] Himanshu: Right, thanks for having me, guys. Always great to see Prashant and Stone. So, verifiability is an old thing: it is being able to verify whatever computation has been done. The excitement about verifiability in AI is that you now have an entirely new realm of compute in which accuracy is not so paramount, and you still want verifiability. So that’s what verifiability is: can I just verify what I’ve done?

[02:05] Prakarsh: And now, the next question is open to all. Would verifiability be possible without crypto being a part of it, or is crypto something that is truly necessary for it and the need of the hour?

[02:37] Himanshu: I mean, verifiability can exist without crypto… I’ll just quickly conclude: the need for verifiability in crypto is greater because you are in the open ocean, swimming in the deep sea. You never know — if you’re with Prashant, you have to be very careful; you never know when he’s trying to rug you, so you must verify every computation.

[03:12] Prashant: That was the coolest example I’ve ever heard. Okay, I hope someday this happens, and I rug Himanshu by giving him the wrong GPU while he sends me the money for the bigger machines. On top of that, there are some very interesting facts around verifiability. Being a compute provider in this space, one thing we have learned while building is — and I wrote a tweet about this a few weeks back — that the biggest beauty of Web3 and decentralized systems is that you get free testers out of the box. Those testers are the people who are trying to exploit your system by bypassing the verifiability you have already placed into it.

[04:14] Prashant: And it’s always a tug of war between somebody who is trying to rug you by bypassing your verifiable system and you as a dev, who keeps working to improve it and get to a position where your system becomes more and more verifiable over time. And in crypto, again, as Himanshu mentioned, it’s crucial: if there is no verifiability, that essentially means we are not going to make it. The reason is that you go and deploy one model called X, and the moment you deploy it, you no longer have any way to check; because of the non-deterministic nature of LLMs, it’s not easy to find out whether that same model has been running and responding to those queries.

Verifiability to Address Bias and Hallucination in AI

[05:10] Stone: Yeah, what got me really excited about crypto a few years ago was that ability to trustlessly verify a lot of these activities on-chain, to cut out the middleman and drive a lot of power back to the end-user and consumer. And what gets us really excited at MIRA is that, as we see a lot of these agents taking off and the sophistication of these LLMs rapidly increasing, we’re now at a point where verifiability matters more than ever. A nice little anecdote I use is that all these different LLMs — OpenAI, Claude, Llama — are all supposed to be built in separate silos so that they’re not influenced by each other and you don’t see bias creeping over.

[06:07] Prashant: I’ll take this a little bit further because of what Stone has just mentioned, and I would love to ask both of you — Himanshu, you can add to this as well. How does verifiability actually solve the problem of hallucination? Is it true that verifiability can solve hallucination issues?

The Role of Verifiability in AI Error Management & Integrating Verifiability in AI Systems

[06:46] Stone: I think so. Right now, when it comes to hallucination — and I’ll throw a little bit of bias into this as well — we’ve been able to drive error rates down below 5%, but obviously that’s not 0%. Fundamentally, issues come up because, Prashant and Himanshu, if we go on ChatGPT and ask the exact same question, each of us is going to rate the response differently: Prashant might say it’s a five out of five, I might say it’s a two out of five, Himanshu might say it’s a three or four out of five, just based on how we grew up and everything else.

[07:49] Himanshu: Right, so verifiability and hallucination. I’ll add two parts to it. First, I disagree that hallucination is a problem; actually, it’s a feature. Any reasoning creature is a search algorithm: you’re searching for the response, searching for the answer in a complex space, and any such search must have an exploratory component. Once rules are built, you have exploitation — the multiplication and addition that you learn.

[08:53] Himanshu: For those parts, we have seen that most of the LLMs are not making mistakes on 1 + 1 equals 2, okay? They’re not making mistakes on those hard-coded rules, the basic arithmetic rules — and they used to; even up to Llama 2 you would see these mistakes. So eventually hallucination will settle in the right place: it will remain where it’s needed and not where it isn’t — that’s about the model. Now, whether verifiability should be about hallucination — I think that’s one of the use cases Stone is describing. In general, I understand verifiability at the AI level as checking that the output was as intended, whatever that means, and at the compute level it’s a harder thing.

[09:46] Himanshu: The compute level is checking bit by bit what the exact output was, which is what we were used to in the previous era of trustless compute, ZK included. That’s why you want to go down to the last opcode in fault resolution and check: where did it disagree? What was the last instruction at which we disagreed? That’s where you see where the disagreement is. But that kind of bit-level alignment doesn’t make sense for AI. This was one of our early theses even before Sentient: it doesn’t make sense, it’s so stupid. These models are so robust that you can perturb them and they still have the same performance; all the different trainings will have the same performance, so you should not insist on bit-level accuracy. Some verifiability at the intent level is needed. That, I think, is the kind of verifiability AI needs, and I guess that’s what you are talking about, Stone.
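To make the distinction concrete, here is a minimal sketch (illustrative only, not any project’s actual method; the benchmark names and tolerance are assumptions) contrasting bit-level verification, which demands byte-identical outputs, with intent-level verification, which only asks that evaluation scores stay within a tolerance:

```python
import hashlib


def bit_level_verify(expected: str, reproduced: str) -> bool:
    """Old-style trustless compute: the re-execution must match bit for bit."""
    return hashlib.sha256(expected.encode()).digest() == hashlib.sha256(reproduced.encode()).digest()


def intent_level_verify(eval_scores: dict[str, float],
                        reference_scores: dict[str, float],
                        tolerance: float = 0.02) -> bool:
    """AI-style verification: the model only has to score roughly the same on an
    agreed benchmark; small numerical drift (CUDA kernels, round-offs) is fine."""
    return all(
        abs(eval_scores[task] - reference_scores[task]) <= tolerance
        for task in reference_scores
    )


if __name__ == "__main__":
    # Two runs of the "same" model on different GPUs: the raw outputs differ slightly,
    # so bit-level verification fails...
    print(bit_level_verify("The answer is 42.", "The answer is 42"))   # False
    # ...but the benchmark numbers are within tolerance, so intent-level verification passes.
    print(intent_level_verify({"mmlu": 0.681, "gsm8k": 0.742},
                              {"mmlu": 0.685, "gsm8k": 0.745}))        # True
```

The point of the toy example is that two honest runs of the same model can fail the first check while easily passing the second.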

[10:54] Prashant: I’m 100% in agreement on that. Also, I was about to tap you, but you’ve already answered it — the whole question of verifiability doesn’t rest only on the raw output. One thing I want to add here, and this is where I was very happy with what Sentient was doing and testing, which is very cool: if you put a fingerprint inside the same model you are running, then essentially you are achieving the same…

[11:23] Prashant: …verifiability. But I’ll come back to that question later, and I’ll pass the mic to Prakarsh. Down the line I’m going to ask you one question that’s a little trickier, and I want to understand it; if that question is coming to my mind, it must be coming to others who are building in the same space as well, so I’m happy to ask it when we get there. But yeah, Prakarsh, over to you.

Future Directions and Innovations in Verifiability

[11:53] Prakarsh: I think it’s a very flowing conversation, but I was very curious — as Prashant said, everything is very dependent on the model itself, right? For example, if I’m running Llama, or Qwen, or any other model, how exactly would I quantify the value of verifiability on top of that? Let’s say I’m a dev and I’m aware of what my inference output is; if I’m serving it from Qwen, I understand, okay, this is coming from Qwen, and I’m pretty aware of what kind of output it’s going to be. But where would the…

[12:24] Prakarsh: …verifiability actually add value to my code or to whatever I’m pushing to the user? So what do you think the key factors are — Stone, and I want everybody to chime in on this — where I would say, okay, this is really worth my time, really worth using this specific protocol that brings me verifiability?

[12:59] Stone: Yeah, I think in terms of use cases that really make sense today, it’s where, as Himanshu was touching on, you have certain guard rails — boundaries within which you’d like the answer to sit — so you have something to check against. Stuff in the legal sense makes a ton of sense, where there is a specific set of rules that may be a little arbitrary at times, but there is a specific set of guard rails you can almost back-check against. Obviously, when it becomes more opinion-based or subjective, that’s when it becomes more challenging to verify what the actual outcome should be, at least from our perspective.

The Importance of Verifiability Protocols

[13:26] Stone: The easiest and most prominent example, where we’re seeing a lot of our 400,000-plus users, is the growth of our crypto chatbot Klok, which is integrated with Delphi AI’s verifiable intelligence tool — your crypto co-pilot. Essentially, what it’s doing is fact-checking answers against the articles and other sources of data that we’ve seen. So as you’re talking with these chatbots and using them more as research tools, it’s similar to the legal example I gave, where there are specific guard rails you want to keep in place — more of a central source of truth coming from the Delphi intelligence articles and their research.

[14:23] Stone: So that’s one example of where we’re seeing the majority of our use, I would say. Some of the other applications we have right now are around gaming — similarly, you have different guard rails you can use as you’re creating better NPC gameplay or creating different maps. And then on the consumer side of things too, setting these preferences, and just overall kind of…

[14:47] Stone: getting back to what I was talking about with these guard grills that you have, you know, creates this environment where you are able to verify the accuracy of the outputs, you know, within that certain set of parameters.

Economic and Computational Cost of Verifiability

[15:10] Prashant: So, just to add on top of that — actually, it’s a question for both of you again, because this is something I always ask a lot of the folks who are building toward a verifiable ecosystem. I think verifiability is good, but compute is not cheap…

I think most of us will agree here: it’s not yet cheap enough to use at scale, and any kind of double spend around compute — what we call the double computation spend problem in the system — matters. To verify one thing, we go and do multiple computations to achieve that fact-checking, and that adds additional cost on top. How do you think about that? I’m asking a slightly tricky question here, okay? What if I embed my verifiability into the model itself, and I know how that model is going to behave when I ask specific questions — private questions? Let’s say I have encrypted those questions, put them into some existing model, fine-tuned it, and then asked my community to deploy it. Then, if the same model is really running, I can query it with a…

…prompt for which I will get the same response back, right? That basically verifies that, yes, the same model is running, because only that model knows what the answer is going to be.

In this problem statement, there is no double compute spend; it’s a single spend. But in terms of going back and revalidating the fact-check — and Stone, this question is specifically for you as well — how long do you think…

…the double compute spend is going to be sustainable? If you have a cost structure around it, I’d love to understand it — if not, that’s fine, we can always take a look back later — because it’s going to be a very tricky problem. We are creating compute, and the one thing we have learned is that it’s not cheap; it’s never going to be cheap, and if you use this computation power somewhere…

Ensuring Model Integrity Through Verifiability

[17:18] Prashant: …elsewhere, we are double-spending it. It’s money, essentially; it becomes unsustainable in the longer run. So I would love to learn whether you have some concrete answers around how you guys are solving these kinds of problems.

[17:53] Himanshu: Stone, please kick it off. I’m still very curious to know what notion of verifiability MIRA is focusing on, in the context of this question as well, and then…

[18:24] Stone: …that will help me complement it with something. No, I feel bad — I wish we had Sid on so he could give you a more articulate answer to this one; he’s got a bit more of the technical knowledge. But in terms of the actual cost we’ve seen, it’s been less than a 2x increase for running our ensemble of models. Essentially, what MIRA does to verify accuracy is leverage an ensemble evaluation, which…

…long story short, uses three different models on the back end to verify the outputs before they are given to the end user. As the developer leveraging our ensemble evaluation, you’re not necessarily seeing the outputs from each of the models within the ensemble — there’s a feature within our Mira console you can access if you actually want to see them.

Still, for the most part, you’re just seeing the one output from the model you’re working with, and only after our ensemble has reached consensus. So it’s not as though you’re validating and paying for four different inference requests — one for the model you’re using and three for our ensemble. The costs aren’t simply, okay, I’m using one extra model to verify, so now my costs are 2x; three models, 3x; and so on — or at least that’s what we’ve seen so far. Himanshu, do you want to add anything here?
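As a rough illustration of the ensemble pattern Stone describes — a minimal sketch only, where the verifier callables, the “supported/refuted” labels, and the quorum rule are placeholders rather than MIRA’s actual implementation — the idea is: a primary model produces the answer, a small ensemble of independent models judges it, and the answer is released only once the ensemble reaches consensus.

```python
from collections import Counter
from typing import Callable

# A "model" here is just any callable that maps a prompt or claim to a string.
Verifier = Callable[[str], str]


def ensemble_verify(claim: str, verifiers: list[Verifier], quorum: int = 2) -> bool:
    """Ask each verifier model to judge the claim; accept only on consensus."""
    verdicts = Counter(v(claim) for v in verifiers)
    label, votes = verdicts.most_common(1)[0]
    return label == "supported" and votes >= quorum


def answer_with_verification(prompt: str, primary: Callable[[str], str],
                             verifiers: list[Verifier]) -> str:
    """Return the primary model's output only if the ensemble agrees it holds up."""
    output = primary(prompt)
    if ensemble_verify(output, verifiers):
        return output
    return "Unable to verify this answer against independent models."


if __name__ == "__main__":
    primary = lambda p: "Paris is the capital of France."
    # Stand-ins for three independently trained verifier models.
    verifiers = [lambda c: "supported", lambda c: "supported", lambda c: "refuted"]
    print(answer_with_verification("What is the capital of France?", primary, verifiers))
```

Because the verifier models can be smaller and only need to judge the final claim rather than regenerate it, the added cost can stay well below a naive “N models, N times the cost” estimate, which is consistent with the sub-2x figure mentioned above.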

[20:23] Himanshu: Right. The fact that verifiability carries extra cost is a no-brainer; it’s a tradeoff. Why are we on the EVM, anyway? Why are we doing things on the EVM? Why don’t you just write that code in Python, give it to me, and I run the same code and check it? The point is, code doesn’t work that way; it won’t give you the same output — there are so many dependencies in the hardware, my machine versus your machine. So we deliberately limited our capability and came up with the EVM, and now it enables all of us to run the same code and verify the same thing. So not only have we multiplied the compute, we have also limited our capability, just to get verifiability — the fact that verifiability has a cost is a given. Now, how low can that cost become? This is where ZK and all that magic have been heading: in a world where proofs can be generated for free or at negligible cost, verifiability becomes very cheap for all compute, and I wouldn’t worry about AI verifiability at all.

[21:00] Himanshu: Now, it turns out that world is not here yet — maybe I haven’t kept up, but it’s not here yet. And even there, no one asks — and this is something I find very intriguing — who is checking that the ZK is doing the right thing? There are maybe five auditors in the world who can audit and check the security of these codes, and basically those five experts are verifying everything for you. You are not, because you don’t know what’s in there, and you can’t really know with a ZK proof — it’s very complex. We pay a lot of cost for verifiability, we limit our capability, and we settle for slower systems. So, what…

[21:26] Himanshu: …Prashant, you are asking — dude, what you are saying is exactly what motivated one of the Sentient ideas. Early on — and this is the pitch, this is one early realization that, to my understanding, Pramod was the first to reach — we realized that we don’t need this hard verifiability. If you look at the paper we wrote in ’23, called Sakshi, it was talking about hard verifiability — not really a full paper, just a quick concept about what could be done there — because at that point we were quite deep into optimistic compute. And optimistic compute is fast; what’s the tradeoff? It’s fast, but you have to repeat the whole computation, right? It happens later, with a delay, but as time passes, that is okay.
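For readers unfamiliar with the term, the optimistic pattern Himanshu refers to can be sketched roughly like this (a toy model; the ledger, sampling rate, and dispute handling here are assumptions, not Sakshi’s design): results are accepted immediately and usable right away, and a verifier re-executes a random sample later, flagging any mismatch after the fact.

```python
import random
from dataclasses import dataclass, field


@dataclass
class OptimisticLedger:
    """Accept compute results immediately; re-check a random sample after the fact."""
    accepted: list[tuple[str, str]] = field(default_factory=list)  # (task, claimed result)

    def submit(self, task: str, claimed_result: str) -> str:
        # Optimistic path: no verification cost up front, result is usable right away.
        self.accepted.append((task, claimed_result))
        return claimed_result

    def audit(self, recompute, sample_rate: float = 0.1) -> list[str]:
        """Later, re-run a fraction of tasks in full and report any disagreement."""
        disputes = []
        for task, claimed in self.accepted:
            if random.random() < sample_rate:
                # Full re-execution: this is the repeated cost optimistic schemes pay.
                if recompute(task) != claimed:
                    disputes.append(task)
        return disputes


if __name__ == "__main__":
    honest = lambda task: str(eval(task))      # toy "compute provider" (arithmetic only)
    ledger = OptimisticLedger()
    ledger.submit("2+2", "4")
    ledger.submit("3*7", "20")                 # a wrong (or malicious) result
    print(ledger.audit(recompute=honest, sample_rate=1.0))   # ['3*7']
```

The tradeoff is exactly as described: fast acceptance up front, at the price of repeating whole computations later and living with a delay before disputes surface.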

[22:31] Himanshu: So one thing with AI is that I don’t want all of that. What are the possibilities? Just to your point, Prakarsh — ah man, hard name to say — it’s not even just two different models: take the same model and run it with two different CUDA kernels, and because there are a lot of approximations and round-offs in GPU operations, how you round off also changes your output. And these outputs are not hidden — no one actually verifies the output for specific queries. The output may look different, but as long as the evals are roughly similar, you assume the model has performed similarly. It’s very hard to recreate model performance — you just remember those numbers, and even recreating that eval is hard — so no one is really asking for verifiability on AI. When you say Llama 3.1 is great, I never say great on Prashant’s GPU, my GPU, or someone else’s GPU; no one says that. It’s fine, it’s good, it has some benchmarks, I’m okay with it, and if it’s off by a few points here and there, I’m okay with that. To extend it and ask what verifiability means in different cases: what Prashant is pointing out is something we find very exciting — can I just check if it’s my model or not, periodically, in the middle? That’s one kind of model verifiability we are after, and

this is not cryptographically airtight because, as we know, models hallucinate, and if you fine-tune them, models have this problem of catastrophic forgetting — in the end, you touch all the weights of the model. So how do you still embed something in the model, the kind of thing Prashant is describing, which allows you to check that it is that model — the secret phrase that only you know? This story I learned from someone else, but now I tell it as my own — I told my mother — well, it’s someone else’s, okay, but it’s a great…

[23:57] Himanshu: …story; anybody can use it. See, this guy was telling me that he told his mom: if someone calls you, asks you for money, and it’s in my voice saying I’m stuck, you have to say this phrase, and only if I respond with the right answer should you continue. This is what we want from our models, right? That same fingerprint is what I want from a model: we should be able to know it’s that model, and then you are okay for some time. Now, there are a lot of attack vectors even in this kind of verifiability —

what if I detect that your query is the verification query and route it to the right model, while routing all other queries elsewhere? So you think through attack vectors like that, and then you come up with a list of security requirements for this kind of verifiability, and that’s what fingerprinting is about — at least one application of it, and that’s what we are doing. But I completely agree with the spirit of this; there is not one kind of verifiability for AI that will suffice. And what I think, Stone…

[24:53] Himanshu: …you are referring to is also very interesting: the factuality check. Many of the agentic architectures that are built have this two-model debate before giving an answer; it has been part of the agent pattern from early on. Okay, Prashant, you are right, compute is expensive, but let me break your heart, man: no one cares about it right now — of course, they’ll probably start caring when they’re bargaining with you once they’re about to go bankrupt. All architectures are damn compute-hungry. The whole thesis of current AI design — and I can bring some context to this — is: forget about compute, imagine compute were infinite, what would you do? That’s why you generate 20,000 tokens of inner reasoning to count the number of r’s in strawberry. We get so excited — oh, reasoning, reasoning — but the question is very dumb, and it’s generating all these tokens; each inner thought is costing you money and energy, the ultimate resource, but we don’t care right now. There could be a world where intelligence is energy-aware, aware of its bills, setting up on Spheron for…

[26:09] Himanshu: …everything. My take is that that’s a very different world — it’s basically how humans operate. Then you need to decide when to bring your best out and when not to; that’s what humans are, that’s what hormones do. But current AI is not designed that way: current AI always puts its best foot forward, its strongest, no matter what dumb question you ask. You ask it one plus one, and it will reason the whole thing out — and yeah, we love that; the machines are like that, and we want everyone to

spend more, so they do — don’t worry about it. However, there are some protests: Yann LeCun famously says things I can’t always follow, but this part I can follow — he’s arguing for energy-aware AI. Okay, it will be limited, but it will be energy-aware. Now, if that’s the case, it is fair to spend compute on verifiability too, on having a better answer. And factuality — the whole notion of facts, in the world of social media itself and in the world of AI…

[27:13] Himanshu: …even more so, is open for debate, right? There’s nothing called a fact; a fact is a consensus mechanism. So why don’t we have limited verifiability through debates, or through multiple agents, multiple AIs — that’s what MIRA is doing — and maybe one of them is a specialist you always consult? So verifiability is expensive, you are right, but I don’t think people will care for the next year. Eventually, though, you are right, and I also feel strongly about it: we should be careful about energy in designing AI, mainly because it’s possible that we don’t become this civilization with infinite energy — right now we are actually betting that we will harness an infinite amount of energy.

[27:57] Prashant: I do agree with all the points you have just made — that was the intent behind the question, to understand both the philosophical angle and the technical angle around it. But just to pitch Stone here a little, because I was very happy when I saw MIRA and what they have been doing — I think the question I asked was partly

to get you to pitch MIRA more aggressively, but let me pitch it a little more myself and add one factual thing about what you guys have been designing, which I personally loved. I don’t know how your team looks at it, but the way I look at MIRA is this: say today I ask a very simple question — who is the President of India, or who is the President of the USA? If a model is trained on data up to 2023 or earlier, most likely that answer will be out of date. But if I go to MIRA and ask the same question, even though the model itself would give me the wrong answer, I will get the right answer, because at runtime I’m getting data that has been checked against what I’d call the social graph — by which I mean crawling the web, crawling the current official government sites, to…

…understand where exactly the data is and what the true data is, then comparing the model’s output and telling you: no, this is wrong, this is right — using the same output but merging both together. And honestly, what you guys are doing will, in the longer run, reduce cost rather than increase it, because you are not fine-tuning the same models again and again on new data sets; instead, you…

…are using the same model and just using the crawlers to verify and validate the data. That’s how I understand the entire MIRA framework to work — I dug into it a bit — and honestly, I love that concept overall, because this is very important. I don’t know how much Himanshu and the tech folks will agree with this, but I come from an infra background, and for me, modeling, training, fine-tuning — you do it once, fine; but imagine you have to run it as a company: you will be screwed, and your team will always be chasing the data. Okay, this one isn’t fine-tuning well, something we missed while setting the data parameters, something has happened — it will be a nightmare, and a lot of teams will struggle to keep up with this space at speed. And that’s where I think,

if you combine the fingerprinting and the verifiability — what Sentient is building and what you guys are building at the MIRA protocol — and combine them properly, it will bring a lot; that’s how I look at it. And it’s also beneficial for us, because then we can host your platforms on Spheron and make them more cost-efficient and effective. But I’ll pause here — thank you so much, guys, for being open and responsive on these things. I’m quite interested to hear more of the questions we have written down for you, so Prakarsh, back to you.

Role of TEE Environments

[31:47] Prakarsh: Yeah, I think this question is actually for you — it’s a wild card question. Where does TEE come in, and what role does TEE play, eventually, when we are speaking of verifiability at a very large scale? As a compute provider, we see that everybody is so interested in TEEs, but we know TEEs existed before the whole narrative took off. Where do you feel the entire TEE segment fits in?

[31:58] Prashant: A lot of TEE folks are going to ban me after this answer, but the thing is, it depends. What Stone was just saying about verifiability, and what Himanshu just described — does it require a TEE? The answer is no; there is zero requirement for TEEs here. I’m a design guy, I look at things from a design perspective, and design is about fundamentals — you talk fundamentals, you

don’t talk about something that is walled off. From a design perspective, what you realize is that the TEE you are creating to protect yourself is not a complete catch-all; you cannot say it’s an entirely black box running there. That isn’t true: as long as there is HTTPS, as long as all the communication layers that exist today are in place, there is going to be a way to intercept…

…a request, and security vulnerabilities will always exist. The only thing we avoid with a TEE is that devs cannot see my private keys — and I don’t think that’s the use case we are discussing here. If we do go in that direction, then yes, that’s where TEE plays a very vital role, and even then in combination with MPC, multi-party computation, because alone, TEEs will not be enough. If you bring just a TEE alone — and correct me on this, guys — I don’t believe in anything where one key sits under the control of one individual entity, and in the TEE segment that entity is Intel. Intel has all the power to screw you up in multiple ways; they claim that’s not the case, but somehow I have a feeling, because from a design standpoint there is no way you can hide your private…

…keys until there are self-executed environments created by the self-creation of agents or systems; until then, you have to inject private keys or something else from outside at the hardware level. So for me, TEE is one piece of what AI agents really need in order to hold keys, but there are other systems being built now — for example, in our case, we have built Skynet, which doesn’t require a TEE as such, and…

…agents can rely on that collective intelligence system to avoid TEE exposure. So a lot of these design-based systems can avoid TEE usage. But if you ask me directly — and I think Himanshu and Stone can also answer this — I don’t think these guys are using TEEs, honestly speaking; and if they are, I don’t know why they would use them for LLMs. But I’ll pause here.

[35:10] Stone: I don’t believe we are, at this point — I can circle back — but no, when it comes to the verification we’re focused on, it’s mostly not about verifying that a transaction is happening on-chain; it’s more about the accuracy and reliability of the output, as was mentioned earlier.

[36:44] Himanshu: Okay, so on the TEE answer — I don’t know, Stone, did you add something after that, or can I just take it from there? There were a few points in what you said, Prashant. Right now, in our use case, we are pretty interested in TEEs; we put a lot of energy into them, and for us they are somewhat complementary to LLM verifiability. They sit outside, to make the agent trustworthy — we are in this notion of loyal AI, right? The model has to be faithful to the community, and the agent has to be loyal to the model. So the first and foremost use case of TEE for me — which the EVM can also serve in some sense, but agents are more open — is what Andrew Miller recently tweeted about: dev-proof. What’s in that code this agent is running, the one holding whatever it is — $10 million, $100 million today, maybe $50 million tomorrow, who knows? What is in that code, and can the dev not mess around with it? That is the promise of TEE. Now, there’s a key point —

there are actually two key points around key management. If you look at these agents holding wallets: first, are those keys inside the TEE itself? That’s not a great solution, because there are a lot of issues with it — if the TEE restarts, what happens? So a complete TEE solution will have a key management solution separately.

So key management sits outside. The other part Prashant raised is whether we trust Intel and AWS — we worked a lot with AWS Nitro for this. Absolutely, you are trusting those two parties, and absolutely there is a risk in that. Put simply, a lot of the AI play in the future — because of the way AI is going, it’s becoming a sort of war-grade technology — will have to be, at least for crypto, regulator-free. Having my complete app controlled by a company governed by a single geography’s laws is tricky, so that is a real risk, and that’s where it has to be complemented with some kind of decentralization as well. Still, as a technology today — maybe I’m wrong about this, but in my limited experience — the flexibility you get with TEE, if it delivers its promise, which it doesn’t right now…

Nobody can write reproducible code for you; even basic things are so hard — it takes such effort to reproduce things. But if TEE can deliver its promise, the flexibility is what is attractive: it covers a large span of applications and yet is secured, and you can verify the code. That’s the attraction.
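For context, the “what’s in that code” promise is usually delivered through remote attestation: the enclave reports a measurement (a digest of the code it booted), and a client trusts it only if that measurement matches a binary the client can rebuild reproducibly — which is why the reproducibility complaint above matters. Below is a highly simplified, vendor-agnostic sketch; real attestation also involves vendor-signed certificates, nonces, and key management, all omitted here.

```python
import hashlib
import hmac


def measure(code_bytes: bytes) -> str:
    """The 'measurement' an enclave reports: a digest of exactly what it loaded."""
    return hashlib.sha256(code_bytes).hexdigest()


def verify_attestation(reported_measurement: str, expected_binary: bytes) -> bool:
    """Client-side check: does the enclave run the code we reproducibly built?

    This is where reproducible builds matter -- if you cannot rebuild the exact
    same bytes, you have nothing trustworthy to compare the measurement against.
    """
    expected = measure(expected_binary)
    return hmac.compare_digest(reported_measurement, expected)


if __name__ == "__main__":
    agent_code = b"def act(): return 'only do what the community approved'"
    reported = measure(agent_code)                             # what the enclave claims to run
    print(verify_attestation(reported, agent_code))            # True
    tampered = agent_code + b"\nsteal_keys()"
    print(verify_attestation(measure(tampered), agent_code))   # False
```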

[39:49] Prashant: Yeah, but one more thing we must consider here — and this is where I go a little deeper again — is that it’s a design problem. TEE, by design, is complex, right…

…because there is a bootstrapping issue when the system first comes up, then backup issues, then storage design issues. What happens when you plug in local storage, or plug in network storage from outside the enclave? Are you leasing out the entire VM, or are you leasing out a container system? If it’s a container system, there is some leakage there too, in multiple ways, if it hasn’t been properly designed. So honestly, from a design perspective it’s infra chaos. On the solution design side, it’s better to ask a very different, very naive, common-sense question: do we really need it? You can also design the same thing with KMS systems, which have been well known in the space for a very long time — people have been using Key Management Services and systems like them to protect their keys — and that can be achieved in multiple ways.

I think TEEs will play a vital role, because as we move toward more decentralization we need more robust systems. However, those robust systems are not ready yet. We did ask TEE experts about this — what happens when it happens — and I am very bullish on some of the teams building TEEs; they have already run into all of these problem statements. Still, I have asked them one question: how are you going to control the cost? Because the cost will skyrocket, and the moment that happens it becomes unsustainable in a compute-heavy infra environment. But I’ll pause here, or else I could keep going.

Mira’s $10 Million Fund

[42:54] Prakarsh: There is one thing I wanted to ask — this is more of a personal question to the different projects here. Stone, you guys launched a $10 million fund specifically for this space. I really want to know about it: what is it, how can people be part of it, and why are you doing it?

[43:31] Stone: Yeah, we just launched our Magnum Opus program, essentially $10 million in funds for developers to build on MIRA. It’s a super exciting time right now — we just launched our MIRA console, and developers who get approved can start building; it’s still limited access, and we’ve got 5,000 on a waitlist, but we’re slowly approving everybody for the console. More specifically, if you have any ideas for larger projects, we’re looking for 10x developers, as they say — people trying to tackle some of the biggest problems in the space, leveraging our verification to differentiate and to provide better results for the community — whether you’re building AI agents or different things on the consumer or gaming side. Definitely reach out and get in touch with me if you’re interested.

[43:15] Prashant: I applied for it — let’s see if my application gets approved. But yeah, consider this stream a public appeal to MIRA. No, I’m joking — just take a look into it.

[44:50] Stone: It’s been a really exciting past couple of months for us. We’ve essentially grown from zero last October to now doing over 200,000 inference queries daily, with Spheron helping us out with a lot of those, and we have over 400,000 monthly active users. We launched our node delegator program, which IAN was a Genesis partner of, and that went really well; we’ll have one more drop, and hopefully — if the market stays relatively okay and we get all our ducks in a row — we’re on track for TGE later on.

What Is Sentient Doing?

[44:02] Prakarsh: Amazing, Stone, let’s go. And Himanshu, the next question is for you: what exactly is Sentient’s core value add, and how does Sentient stand out from other players? Eventually there will be many, but how does Sentient stand out through that value addition?

[46:27] Himanshu: So we are singularly focused on model creation. We are a model company, and the aspect of model creation that’s of utmost interest to us right now is building these loyal AI models. What that means is that today, every company has a team of ten people working for them to decide what their AI will look like.

But there are just four companies in the world, maybe five, maybe a few more — in China, a lot more — who are leading, and they’re all aligned either to the regulator, for safety and censorship measures, or to their product managers, for the outcome — just as search algorithms and recommendation engines have been. We feel that AI is an opportunity to redesign that whole alignment system, where right now there are very few applications whose alignment teams,

recommendation and design teams, and preference design teams dictate everything: what’s trending, what to show you, search results, everything. Our goal is to give the world a programmable layer for making AI loyal to them in the different ways they want. What that means is that, number one, you should be able to set the alignment of those models to the form you want. Interestingly, Anthropic did some experiments on this — they call it constitutional AI — and they did it about…

…one and a half years back, with 1,000 people recruited, sort of as lip service. But the framework is there; in fact, we are essentially doing what Anthropic would have done if they were not a regulated company, complementary to OpenAI. When Anthropic talks about alignment, it’s safety and harmlessness — of course, safety and harmlessness are the most important things — but the hidden part of alignment that no one talks about is the biases and preferences, which determine everything, because they determine who that reasoning works for. The powerful reasoning the model has — who is it working for? Is it working for your business case or not, at the business level? At a community level, is it working for your community? Say a model’s knowledge tells it that Solana has been much stronger than Ethereum over the last five years — it has seen some trends and concluded that — then no matter what agent you build on it…

…or what prompt you give it, its inherent reasoning, if you ask it where to invest, says Solana. That’s its inherent bias, so why would an Ethereum project support or build on this model? It’s stupid, right? So every community wants different programmed alignment. The same argument applies at the country level — look at India, for example: among those who follow narratives there, there is a hard push right now on building an India-aligned AI, and for the same reason. So at the country level, same argument; that’s another community. So the first part is alignment — community alignment. Now, how do you know it’s your model, and why should I participate as a community in it? That’s the community-owned part. With our first model, Dobby, we saw how excited people can be about such things. Anthropic had 1,000 people governing their model — and that wasn’t even governing, it was a survey — whereas we have 650k people governing our first model, and this is the scale at which model…

…governance can happen. Direct democracy over models can happen, and we are quite excited about it. Of course, you can’t expect people to take every alignment call on the model themselves, so we are thinking of a mechanism where, roughly, the builder is the proposer and governance is left to the community: proposals come in based on how the model is being used and where you want to take it, and the community has a say. That’s how we’re thinking about it — they own it, they govern it. And the last part is phenomenally fascinating; it’s what we call control, which is being able to add things to the model so that specific queries determine its behavior. For specific queries, the model’s behavior can be very different — secret phrases that allow secret accesses. These attack vectors are called backdoors, essentially,

and one thing we are doing is converting them into an asset. This is an example that can apply to many places, but I’ll give you one. Vyas, who was earlier at EigenLayer and is now doing something else, put out an example where he was locked out of his door. He had an app, but the app is meant for introducing yourself to the person inside, and it won’t give you access to the door itself. Now imagine that in that app he had a backdoor where he could start with a secret phrase and say, open the door — and because of that phrase, the door opens, because only he knows it. Having that kind of control over the model is what we call control; one example is controlling certain queries.

A simple example of control is a case where you don’t want hallucination — multiplication, say, where you want no hallucination at all — and the training has happened so the model is well controlled on that. There are many examples of this. So: alignment, ownership, control, for all the available models — that’s what Sentient is doing, and I feel we are the only players doing this in crypto, right…
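As a toy illustration of the “secret phrase” idea — purely hypothetical; this is not Sentient’s fingerprinting scheme, and the trigger prompts and key derivation are made up for the example — the owner keeps a set of secret trigger prompts the fine-tuned model was trained to answer in a specific way, and periodically mixes one into normal traffic to check that the hosted model is really theirs.

```python
import hashlib
import hmac
import random
from typing import Callable


def expected_response(secret_key: bytes, trigger_prompt: str) -> str:
    """Derive the fingerprint response the fine-tuned model was trained to emit."""
    return hmac.new(secret_key, trigger_prompt.encode(), hashlib.sha256).hexdigest()[:16]


def spot_check(model: Callable[[str], str], secret_key: bytes,
               trigger_prompts: list[str]) -> bool:
    """Mix a random fingerprint query into traffic and check the reply.

    Sampling a different trigger each time makes it harder for a dishonest host
    to recognise the verification query and reroute it to the genuine model.
    """
    prompt = random.choice(trigger_prompts)
    return model(prompt) == expected_response(secret_key, prompt)


if __name__ == "__main__":
    key = b"community-owned-secret"
    triggers = ["open sesame 17", "blue giraffe protocol", "quiet harbour 42"]

    # A host running the genuine fingerprinted model answers triggers correctly.
    genuine = lambda p: expected_response(key, p) if p in triggers else "normal answer"
    # A host that swapped in a cheaper model does not know the fingerprints.
    imposter = lambda p: "normal answer"

    print(spot_check(genuine, key, triggers))    # True
    print(spot_check(imposter, key, triggers))   # False
```

As Himanshu notes above, a real scheme also has to survive a host that detects and reroutes verification queries, which is where the larger set of security requirements for fingerprinting comes from.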

[55:50] Prashant: Are you looking for more players? [Laughter] Just a joke.

[50:10] Himanshu: That’s a good question, actually. The answer Peter Thiel gave is no, you are not looking for more players; competition is for losers.

What Is Spheron Doing?

[50:25] Prakarsh: I think my last question is for Prashant. In terms of compute — we’ve had so much discussion around compute — what exactly is Spheron doing toward bringing programmability to compute? Why do you feel this is the need of the hour, why is there a big market gap that requires programmable compute, how can it be done easily, and why does its composability matter?

[50:57] Prashant: To achieve any of the things Himanshu and Stone have just spoken about, we require compute, and we require it at scale. The reason is very simple: if you need a model to be trained, fine-tuned, or even just to run inference on, you need compute. How do you get that compute? There are multiple routes: you can go to a centralized player and access compute there, but when you are building something community-driven and community-oriented, do you really want your funds going to someone who might never give that funding back? Because that’s how an ecosystem works: an ecosystem only thrives and excels when the same funding comes back into it, ensuring that the people who are aligned with it benefit, rather than people who just come to extract value out of it. That’s where Spheron plays a very vital role — ensuring that value creation stays inside Web3 instead of leaving it. That’s one part. Now, coming to the programmability part and how it is different:

I always take this example. We see very few agents running today, using Sentient’s models, somebody else’s, Ollama, or whatever. People around the globe will be using a bunch of these models — and what will they be doing with them? Just chatting? Mostly not. We are going to see a world where agents manage our lifestyle: what I should eat today, what I should drink, alarms, email triage — is there any critical email, should the agent reply on my behalf? All of these things will slowly be handed over to agents. Now imagine who will run all of this behind the scenes — just give it a thought for once. Are we going to see 10,000 or 100,000 different companies running these things in different places? The answer could be yes or no, but if it’s true, then we are going to need a massive amount of compute just to make it work. And then there is another question: who is going to manage that compute, out of the box? Is it going to be a human managing those 10,000 deployments? Mostly?

No — these agents should be managing it themselves, and that’s where autonomy and all of those things come into the picture. To bring autonomy, what do you need? You need programmable compute; if you don’t have programmable compute, you can never achieve autonomy. Any of you can disagree with this statement if you want, but the only way to gain autonomy is to bring programmability into compute. And the scale of compute needed is only available across retail devices and data centers out there, not on centralized servers, because centralized servers also have restrictions — you cannot go beyond a certain extent. I don’t know how many Web3 companies have worked with centralized services at a gold-tier partner level, but I can tell you, on the infra side, that if you try to spin up 20,000 instances on AWS today, I bet they will block you; they will not allow it. You’d have to move through their different partnership levels to enable that, and then one API endpoint failure can create a massive massacre for the 10,000 agents you deployed. So a lot of things are going to get hampered at…

…the foundational level if there is no programmable compute out there. And the only way is to aggregate it — we were doing a lot of research around compute, and what we found is that if we aggregate even 1% of the world’s compute supply that sits in our homes, it will surpass whatever compute anyone in the centralized system owns. So essentially, if the community comes together and we give them enough of a platform, enough of a place, to offer their computing power for sale on the open market, we can build the biggest data center people have ever seen — combining the entire world’s computing power in one place. And the beauty of this computing power is that it isn’t barred or restricted from the people using it. For example, there was a very big debate after DeepSeek, in the US as well: how did it happen, how did these guys get the GPU supply? The question is fair, but that’s not the question we should be asking.

I think we should be giving supply to as many people as possible so that we can see more and more of these innovations coming out of the box. That’s what we need around compute, and that is where Spheron plays a very vital role. But yeah, I’d love to wrap it up here.

[62:12] Prakarsh: That’s the end of it — everybody has made their point. Thank you so much for joining us, thank you for being here and being part of the stream, and have a good one.


