
Why web3 needs to think like Amazon



The following is a guest post from Anurag Arjun, Co-Founder of Avail.

Modern tech platforms succeed because they break complex operations into specialized components. During high-demand events like Black Friday, Amazon can scale up specific services under pressure while others maintain normal operations.

This architecture has enabled an entire ecosystem of businesses to build on top of AWS, each focusing on its core competency while leveraging a battle-tested infrastructure that gives users a seamless experience. In 2025, it’s time for Web3 to start thinking like Amazon and other web2 giants because a microservices-style web3 is the ideal foundation for the future of business.

Platform Independence

Even Amazon recognizes that the future isn't about platform lock-in, at least not within a storefront; the real value comes from providing infrastructure that powers business growth anywhere. Chinese merchants who built their foundation on Amazon are now growing faster on platforms like Walmart and TikTok Shop. Rather than fighting this shift, Amazon is adapting: it has opened its logistics operations to these multi-platform sellers and competes on infrastructure quality rather than exclusivity.

This mirrors how web3 should evolve: instead of trying to trap users and businesses within closed ecosystems, protocols need to be thinking more about how to unify specialized infrastructure that adds value regardless of where the actual transactions occur.

Just as Amazon can profit from merchants’ success on other platforms by providing crucial backend infrastructure, web3 protocols can thrive by offering specialized services – like verifiable ownership or programmable assets – that create value across the entire digital economy. The winners won’t be those who build walled gardens but those who provide the essential infrastructure that makes business better everywhere.

The future of web3 isn’t about building isolated chains; it’s about creating services that communicate seamlessly behind the scenes. To understand this evolution, look at how microservices work: When you interact with a web app, you’re not actually interacting with one monolithic system. Instead, specialized microservices handle each part of the experience — image assets, in-browser chat, inventory, payments, shipping — communicating asynchronously at such high speeds that users perceive it as a single, smooth experience.
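As a rough illustration of the pattern described above, here is a minimal sketch of concurrent service composition. The service names are hypothetical and the calls are stubbed with artificial latency; in production each would be a separate networked microservice.

```python
import asyncio

# Hypothetical microservices, stubbed as coroutines with simulated latency.
async def fetch_inventory(item_id: str) -> dict:
    await asyncio.sleep(0.01)
    return {"item": item_id, "in_stock": True}

async def fetch_price(item_id: str) -> dict:
    await asyncio.sleep(0.01)
    return {"price_usd": 19.99}

async def fetch_shipping(item_id: str) -> dict:
    await asyncio.sleep(0.01)
    return {"eta_days": 2}

async def product_page(item_id: str) -> dict:
    # The calls run concurrently; the user perceives one combined response.
    parts = await asyncio.gather(
        fetch_inventory(item_id),
        fetch_price(item_id),
        fetch_shipping(item_id),
    )
    merged: dict = {}
    for part in parts:
        merged.update(part)
    return merged

print(asyncio.run(product_page("sku-123")))
```

Despite three simulated network hops, the page assembles in roughly the time of the slowest call rather than the sum of all three, which is the property that makes the composite feel like a single system.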

This is a Web3 that Web2 can’t compete with: Seamless user experience, plus verifiable ownership, permissionless participation, and programmable value transfer.

It’s not the Web3 of today, but it soon will be. As rollups and application-specific chains proliferate, users still must navigate an increasingly complex landscape of bridges, third-party solutions, and varying security assumptions. Each new chain adds another layer of complexity, forcing users to understand bridging mechanics and manage assets across multiple networks. This fragmentation isn’t just inconvenient – it’s becoming a fundamental barrier to mainstream adoption.

Moreover, this problem is set to worsen dramatically. The ecosystem is moving toward a world with hundreds or thousands of rollups, many optimized for specific applications. Without a unifying framework enabling these chains to communicate seamlessly, the user experience will become increasingly fractured and inaccessible to mainstream users.

Breaking Down Bridges

The technical foundation for Amazon-like experiences in Web3 requires three components: reliable data availability, proof verification, and a coordination layer. Data availability ensures transaction information is properly published. Proof verification, through validity proofs or fraud proofs, guarantees correct execution. The coordination layer aggregates these proofs while maintaining chain sovereignty.
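To make the three components concrete, here is a deliberately simplified sketch. The class and field names are hypothetical, and the data-availability and proof checks are stubs standing in for real cryptographic verification:

```python
from dataclasses import dataclass

@dataclass
class Proof:
    chain_id: str
    kind: str        # "validity" or "fraud"
    data_root: str   # commitment to the published transaction data
    valid: bool      # stand-in for actual cryptographic verification

class CoordinationLayer:
    """Aggregates proofs from sovereign chains without dictating a proof system."""

    def __init__(self):
        self.accepted = {}  # chain_id -> latest accepted Proof

    def submit(self, proof: Proof) -> bool:
        # 1. Data availability: was the transaction data published? (stubbed)
        if not proof.data_root:
            return False
        # 2. Proof verification: does the validity/fraud proof check out? (stubbed)
        if not proof.valid:
            return False
        # 3. Aggregate while each chain keeps its own proof system and sovereignty.
        self.accepted[proof.chain_id] = proof
        return True

hub = CoordinationLayer()
assert hub.submit(Proof("rollup-a", "validity", "0xabc", True))
assert not hub.submit(Proof("rollup-b", "fraud", "", True))  # missing data root
```

A real hub would replace the stubbed checks with light-client data-availability sampling and per-chain validity or fraud-proof verification.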

Crucially, such a system must preserve trust minimization. Unlike Amazon’s trusted API calls, blockchain interactions require cryptographic guarantees that are verifiable through light clients on mobile devices. Users should be able to verify both data availability and execution proofs without trusting intermediaries.

The challenge lies in aggregating different types of proofs. Each rollup ecosystem – from Polygon to StarkWare – implements varying proof systems. Creating adapters to make these systems compatible while maintaining their security guarantees represents the core technical challenge facing Web3 infrastructure.

Success requires a permissionless verification hub that can aggregate proofs while allowing chains to maintain independence. Chains must freely choose which proofs they accept rather than conforming to forced standards. This preserves the innovation that makes modular blockchain architecture powerful while enabling seamless user experiences. The missing piece is coordinating these components into a unified but sovereign system.

Just as Amazon’s microservices architecture enabled e-commerce to scale, asynchronous chain communication will shape Web3 into the best place for businesses of the future to build and operate on.


Argentina’s President Javier Milei Launches Solana Meme Coin—LIBRA Crashes 89% – Decrypt




The X account of Argentina’s President Javier Milei promoted a Solana meme coin called LIBRA late Friday. Traders initially piled in to purchase the coin, which surged to a market cap of about $4.5 billion—but amid growing doubts over the legitimacy of the launch and whether it was a pump-and-dump scam, the price has since plummeted.

LIBRA has plunged by about 89% since it peaked just hours earlier after it was announced on X (formerly Twitter). Data from crypto analytics platform DexScreener showed that LIBRA reached a price of $4.50 after launching, but has since crashed to just $0.50. The token has racked up about $1.1 billion in trading volume in a matter of a few hours.

According to the website of the Viva La Libertad Project, the aim of the meme coin initiative is to boost the Argentine economy by funding small projects and local businesses.

“This private project will be dedicated to encouraging the growth of the Argentine economy by funding small Argentine businesses and startups,” a message posted to Milei’s account reads (as translated by X). “The world wants to invest in Argentina.”

The episode recalled the surprise launch of U.S. President Donald Trump’s TRUMP token in January, just days before his inauguration. As with that earlier launch, meme coin traders flocked to buy into LIBRA as it started to soar—but many started to second-guess whether it was a legitimate launch, or if the leader’s account had perhaps been hijacked.

Trump’s coin was ultimately legit, and Bloomberg Línea reports that Milei confirmed to the publication that he really did share the LIBRA token launch, though he emphasized that it’s not his meme coin.

Many traders dumped their holdings while on-chain analysts pointed to concerns around the launch. On-chain analytics firm Chainalysis noted several potential red flags regarding the token launch, including receiving its first funding of SOL from an instant swap service, plus a large portion of the supply being controlled by a single wallet.

“The address that created the token and the address holding a large portion of the LIBRA supply also appear to be controlled by single private keys, rather than multi-signature setups that are more common of established token launches,” Chainalysis wrote.

Bubblemaps, an on-chain data visualization startup, alleged that the team behind LIBRA is cashing out, accelerating its price decline in recent hours.

“They already made $87M by removing USDC and SOL from liquidity pools,” Bubblemaps wrote on X. “LIBRA is down 85% because the devs absorbed $87M of buy pressure into their pockets. $500M more to go.”

Editor’s note: This story was updated after publication to include Bloomberg’s confirmation from President Milei and more current data.


Big Facts or Big Hype? Decoding The Truth of Verifiability



Welcome to the latest installment of our podcast series, where we dive deep into the significance and implications of verifiability in technology and cryptocurrency. This episode brings together industry legends to dissect what verifiability means in today’s tech landscape, why it’s necessary, and how it’s evolving with the advent of cryptographic advancements and blockchain technology.

Speakers on This Episode

This episode features Himanshu, co-founder of Sentient; Stone, head of Business Development at MIRA; and Prashant, who leads the development of Spheron, with Prakarsh hosting.

You can watch or listen to the full episode on our YouTube channel.

Keep reading for a full preview of this compelling discussion.

Noteworthy Quotes from the Speakers

Himanshu: “Verifiability in AI introduces a completely new realm of computation where accuracy isn’t as paramount as being able to verify what computation has done.”

Stone: “What excited me about crypto a few years ago was the ability to trustlessly verify a lot of these activities on-chain, cutting out the middleman and driving power back to the end-user and consumer.”

Prashant: “The biggest beauty of web3 systems and decentralized systems is that you get free testers out of the box. Those testers are the people who are trying to exploit your system.”

Quote of the Episode

“Verifiability in crypto is more crucial because you are swimming in the open ocean, and you don’t know when you might need to verify every computation to ensure safety.” — Himanshu

Speakers and Their Discussion Focus

Himanshu delves into the necessity of verifiability for computational accuracy and security in the development of AGI. Himanshu explains the inherent need for a system to verify its actions within a trustless environment, particularly in AI and blockchain operations.

Stone discusses MIRA’s innovative approaches to enhancing verifiability in the blockchain space and emphasizes how trustless verification can empower end-users and consumers, providing greater control over their digital interactions and transactions.

Prashant shares insights from the frontline of developing Spheron, exploring how verifiability is crucial for maintaining the integrity of systems against potential exploits and ensuring the robustness of decentralized platforms.

TL;DR

The podcast focused on the importance of verifiability in AI and blockchain, discussing its necessity for integrity in decentralized environments. Key takeaways include:

Verifiability ensures the accuracy and trustworthiness of computations, which is crucial in open and decentralized systems.

It helps address biases and prevent hallucinations in AI by maintaining error rates below critical thresholds.

Economic and computational costs are associated with verifiability, but these can be managed through innovative approaches like ensemble evaluations and decentralized computing.

Future innovations may leverage verifiability to enhance user trust and system reliability, driving forward the integration of AI and blockchain technologies.

Transcript of the Podcast: “Big Facts or Big Hype? The Truth of Verifiability”

Panel Introduction

[00:01] Prakarsh: Hello, hello, hello! Welcome to our latest stream, which promises to be an extraordinary one for us. Today, we’re diving into a hot topic that’s capturing attention everywhere: verifiability. Dubbed the “new cool kid on the block,” verifiability has become essential in our digital conversations, prompting a wave of discussions and projects that aim to tackle its complexities. To help us navigate this crucial subject, some very special guests join us. Let’s explore why verifiability is not just necessary but vital and what makes it the focal point of so many innovative endeavors.

[00:33] Prakarsh: With me today are some of the original pioneers in this space who have been developing groundbreaking technologies for quite some time. Joining us is Himanshu, the co-founder of Sentient, where he leads the development of artificial general intelligence (AGI). We also have Stone, the head of Business Development at MIRA, overseeing the strategic growth of their projects. Lastly, from our team, we have Prashant, who is spearheading the development of Spheron. This field has captivated my interest immensely, especially as I began reading and exploring its depth. Today, we delve into the first topics that sparked my curiosity…

The Need for Verifiability in Crypto

[01:10] Prakarsh: The question that comes to my head is: what exactly is verifiability? If I throw the ball to anybody here, I'd want to understand, for our viewers and for everybody, what verifiability is and why we need it. Himanshu, I would love to start with you.

[01:53] Himanshu: Right, thanks for having me, guys. Always great to see Prashant and Stone. So, verifiability is an old thing. Verifiability is being able to verify whatever computation has been done, and the excitement about verifiability in AI is now you have an entirely new realm of compute in which accuracy is not so paramount, and you want verifiability. So that’s what verifiability is, can I just verify what I’ve done?

[02:05] Prakarsh: Now, the next question is open to all. Would verifiability be possible without crypto being a part of it, or is crypto truly necessary for it, the need of the hour?

[02:37] Himanshu: I mean, verifiability can be… let me just quickly conclude: the need for verifiability in crypto is greater because you are in the open ocean, swimming in the deep sea. You never know; if you're with Prashant, you have to be very careful, you never know when he's trying to rug you, so you must verify every computation.

[03:12] Prashant: That was the coolest example I've ever heard. Okay, I hope someday this happens: I rug Himanshu by giving him the wrong GPU, and he sends me the money for the bigger machines. On top of that, I think there are some very interesting facts around verifiability. Being a compute provider in this space, one thing we have learned while building, and I wrote this in a tweet a few weeks back, is that the biggest beauty of Web3 and decentralized systems is that you get free testers out of the box. Those testers are the people who are trying to exploit your system by bypassing the verifiability that you have already placed into it.

[04:14] Prashant: It's always a tug of war between somebody who is trying to rug you by bypassing your verifiable system and you, as a dev, who keeps working to improve and to get to a position where your system becomes much more verifiable over time. And in crypto, again, as Himanshu mentioned, it's crucial: if there is no verifiability, we are simply not going to make it. The reason is that you deploy one model, call it X, and the moment you deploy it, because of the non-deterministic nature of LLMs, it's not easy to find out whether that same model has been running and responding to those queries.

Verifiability to Address Bias and Hallucination in AI

[05:10] Stone: Yeah, I think what got me really excited about crypto a few years ago is the ability to trustlessly verify a lot of these activities on-chain, to cut out the middleman and drive a lot of power back to the end-user and consumer. What gets us really excited at MIRA is that as we see a lot of these agents taking off, and the sophistication of these LLMs rapidly increasing, we're now at a point where verifiability matters more than ever. A nice little anecdote I use: all these different LLMs (OpenAI, Claude, Llama) are supposed to be built in separate silos so that they're not influenced by each other and you don't see bias creeping over.

[06:07] Prashant: I'll take this a little bit further because of what Stone just mentioned, and I would love to hear from both of you on this; Himanshu, you can add to it as well. How does verifiability solve the problem of hallucination? Is it true that verifiability can solve hallucination issues?

The Role of Verifiability in AI Error Management & Integrating Verifiability in AI Systems

[06:46] Stone: I think so. When it comes to hallucination, and I'll throw a little bit of bias into this as well: right now we've been able to drive error rates down below 5%, but obviously that's not 0%. Fundamentally, issues come up because if Prashant, Himanshu, and I go on ChatGPT and ask the exact same question, each of us is going to rate the responses differently. Prashant might say it's a five out of five, I might say it's a two out of five, Himanshu might say a three or four, just based on how we grew up and everything else.

[07:49] Himanshu: Right, so verifiability and hallucination. I'll add two parts to it. First, I disagree that hallucination is a problem; actually, it's a feature. Any reasoning creature is a search algorithm, searching for the response, for the answer, in a complex space, and any search algorithm must have an exploratory component. Except once rules are built, you have exploitation: the multiplication and addition that you learn.

[08:53] Himanshu: So for those parts, we have seen that most of the LLMs are not making a mistake on 1 + 1 equals 2, okay? Those hard-coded rules, the basic arithmetic rules, they're not making mistakes on, though they used to; up to Llama 2, you would see these mistakes. So eventually the hallucination will settle in the right place; it will remain where it's needed and not where it isn't. That's about the model. Now, whether verifiability should be about hallucination, I think that's one of the use cases Stone is describing, but in general, I understand verifiability at the AI level as checking that the output was as intended, whatever that means; at the compute level, it's a harder thing.

[09:46] Himanshu: The compute level means checking bit by bit what the exact output was, which is what we were used to in the previous era of trustless compute, even ZK. That's why, in fault resolution, you want to go to the last opcode and check: where did it disagree? What was the last instruction at which we disagreed? That's where we see where the disagreement is. But bit-level alignment doesn't make sense with AI now. This is one of the early theses we had even before Sentient, and it doesn't make sense; it's so stupid. Models are so robust that you can perturb them and they still have the same performance; all different trainings will have the same performance. So you should not insist on bit-level accuracy. Some verifiability at the intent level is needed. That, I think, is the kind of verifiability that AI needs, and I guess that's what you are talking about, Stone.

[10:54] Prashant: I'm 100% in agreement on that. Also, I was about to tap you, but you've already answered it. One thing I want to add: I think the biggest challenge is going to be, and this is where I was very happy with what Sentient was doing and testing, which is very cool, that if you put a fingerprint inside the same model you are running, you are essentially achieving the same verifiability. But I'll come to that question later, and I'll pass the mic to Prakarsh. Down the line I'm going to ask one question that is a little trickier, and I want to understand it; if it's coming to my mind, it must be coming to others building in this space as well, so I'm happy to ask it once we get there. But yeah, Prakarsh, over to you.

Future Directions and Innovations in Verifiability

[11:53] Prakarsh: This is a very flowing conversation, but I'm curious about something Prashant said: everything is very dependent on the model itself, right? For example, if I'm running Llama, or Qwen, or any model, how exactly would I quantify the value of verifiability on top of that? Let's say I'm a dev and I'm aware of what my inference output is; if I'm serving from Qwen, I understand, okay, this is coming from Qwen, and I'm pretty aware of what kind of output it's going to be. So where would verifiability actually add value to my code, or to whatever I'm pushing to the user? What do you think the factors are, Stone, and I want everybody to chime in on this, where I would say, okay, this is really worth my time, really worth utilizing this specific protocol that brings me verifiability?

[12:59] Stone: Yeah, in terms of use cases that really make sense today, it's where, as Himanshu was touching on, you have certain guardrails, boundaries within which you'd want the answer to sit. Stuff in the legal sense makes a ton of sense: there is a specific set of rules that may be a little arbitrary at times, but there is a specific set of guardrails you can almost back-check against. Obviously, when it becomes more opinion-based or subjective, that's when it becomes more challenging to verify what the actual outcome should be, at least from our perspective.

The Importance of Verifiability Protocols

[13:26] Stone: The most prominent example, where we're seeing a lot of our 400,000-plus free users, is our crypto chatbot Klok, which is integrated with Delphi's verifiable intelligence tooling as a kind of crypto co-pilot. Essentially, what it's doing is fact-checking responses against the articles and other sources of data we've seen. So as you're talking with these chatbots and using them as research tools, it's similar to the legal example I gave: there are specific guardrails you want to keep in place, with more of a central source of truth coming from the Delphi intelligence articles and their research.

[14:23] Stone: That's one example of where we're seeing the majority of our use, I would say. Some of the other applications we have right now are around gaming; similarly, you have different guardrails you can use as you're creating better NPC gameplay or creating different maps. And then on the consumer side of things, too, setting these preferences. Overall, getting back to what I was saying, these guardrails create an environment where you are able to verify the accuracy of the outputs within that certain set of parameters.

Economic and Computational Cost of Verifiability

[15:10] Prashant: Just to add on top of that, it's a question for both of you again, because this is something I always ask a lot of the folks building verifiable ecosystems. I think verifiability is good, but compute is not cheap. I think most of us will agree it's not yet cheap enough to use at scale, and any kind of double spend around compute, what we call the double-computation-spend problem, adds cost: to verify one thing, we go and do multiple computations to achieve that fact-checking, right?

Let me ask a slightly tricky question here, okay? What if I embed my verifiability into the model itself? I know how that model is going to behave when I ask specific questions. Say I have a private question; I encrypt it, put it into some existing model, fine-tune it, and then ask my community to deploy it. Then I go and query the deployed model with that same prompt, and if the same model is really running, I will get the expected response out of it, because only that model knows the answer. That basically verifies that the same model is running.

In this setup there is no double compute spend; it's a single spend. But in terms of going and revalidating the fact check, and Stone, this question is specifically for you as well: how sustainable do you think the double compute spend is going to be? If you have a cost structure around it, I would love to understand it, because it's going to be a very tricky problem. We provide compute, and one thing we have learned is that it's not cheap; it's never going to be cheap. And if we use this computation power…

Ensuring Model Integrity Through Verifiability

[17:18] Prashant: …elsewhere, we are double spending it. It's money, essentially, and it becomes unsustainable in the longer run. So I would love to learn if you have some contextual answers around how you guys are solving these kinds of problems.

[17:53] Himanshu: Please kick it off, and Stone, I’m very curious to still know what notion of verifiability MIRA is focusing on in the context of this question also, that’s and…

[18:24] Stone: That will help me complement it. No, I feel bad; I wish we had Sid on so he could give you a more articulate answer to this one, as he's got a bit more of the technical knowledge. But in terms of the actual cost, we've seen less than a 2x cost increase for running our ensemble of models. Essentially, what MIRA does to verify accuracy is leverage an ensemble evaluation, which, long story short, uses three different models on the back end to verify outputs before they are given to the end user.

As the developer leveraging our ensemble evaluation, you're not necessarily seeing the outputs from each of the models within the ensemble, unless you use a feature within our Mira console that lets you access them. For the most part, you're just seeing the one output from the model you're working with, and only after our ensemble has reached consensus. So it's not like you're validating and running four different inference requests, one with the model you're using and then three for our ensemble.

So the costs aren't necessarily, okay, I'm using one extra model to verify, so now my costs are going to be 2x, three models, 3x, and so on; at least, that's what we've seen so far. Himanshu, do you want to add anything here?
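Stone's description of ensemble evaluation can be sketched roughly as follows. This is a toy illustration, not MIRA's actual system: the judge functions are hypothetical stand-ins for independent models, and the consensus rule is a simple vote threshold.

```python
from collections import Counter

# Hypothetical judges; a real system would query three independent LLMs
# to grade the candidate output.
def judge_a(claim: str) -> str:
    return "valid"

def judge_b(claim: str) -> str:
    return "valid"

def judge_c(claim: str) -> str:
    return "invalid"

def ensemble_verify(claim: str, judges, threshold: int = 2) -> bool:
    """Release the output only if at least `threshold` judges call it valid."""
    votes = Counter(judge(claim) for judge in judges)
    return votes["valid"] >= threshold

# Two of three judges agree, so the claim passes at the default threshold.
print(ensemble_verify("1 + 1 = 2", [judge_a, judge_b, judge_c]))
```

The cost intuition from the transcript shows up directly: the extra verification calls add a bounded multiple of the base inference cost, which is why the threshold and ensemble size are the main tuning knobs.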

[20:23] Himanshu: Right, I think the fact that verifiability has an extra cost is a no-brainer, and this is a tradeoff, you know. Why are we on the EVM anyway? Why don't you just write that code in Python, give it to me, I run the same code, and then I should be able to check it? The point is, code doesn't work that way; it won't give you the same output. There are so many dependencies in hardware, between my machine and your machine, so we limited our capability and came up with the EVM, and now it enables all of us to run the same code and verify the same thing. So not only have we multiplied the compute, we have also limited our capability just to verify. The fact that verifiability has a cost is a given. Now, how low can that cost become? This is where ZK and these magical efforts have been heading: in a world where proofs can be generated for free, or at negligible cost, verifiability becomes very cheap for all compute, and I wouldn't worry about AI verifiability at all.

[21:00] Himanshu: Now, it turns out that world is not here yet; maybe I haven't kept up, but it's not here yet. And even there, there's something I find very intriguing that no one asks: who is checking that the ZK is doing the right thing? There are maybe five auditors in the world who can audit and check the security of these codes, so basically five experts are verifying everything for you. You are not, because you don't know what's in there, and you can't know that about a ZK proof; it's very complex. We pay a lot for verifiability: we limit our capability and we go for slower systems.

[21:26] Himanshu: Prashant, what you are asking, I mean, dude, what you are saying is exactly what motivates one of the Sentient things. Early on, and this is the pitch, there was one early realization, which to my understanding Pramod was the first to have: we don't need this hard verifiability. If you look at the paper we wrote in '23, called Sakshi, it was talking about hard verifiability; not really a paper, just a quick concept about what we could do there, because at that point we were quite deep into optimistic compute. What's the tradeoff in optimistic compute? It's fast, but you have to repeat the whole calculation; it happens later, with a delay, but as time passes, that's okay.

[22:31] Himanshu: So one thing on AI is I don’t want all of it, so what, what are the possibilities? , just to Procash your point, the right… Ah, man, hard spelling, to your point, it’s not just two different models, you take the same model and run it with two different CUDA kernels because there are a lot of these approximations round-offs that happen in a GPU operation, so how you round off also changes your output, and these outputs are not hidden outputs. I mean, no one actually verifies the output for specific queries, the output may look different, but as long as roughly the evals are similar, you think the model has performed similarly. It’s very hard to recreate model performance, you just remember those numbers, and even recreating that eval is hard, so no one is ever asking for verifiability on AI. When you say, Llama 3.1 is great; I never say great where Prashant’s GPU, my GPU, someone else’s GPU, no, no one is saying that, it’s fine, it’s good, it has some benchmarks, I’m okay with it, and it’s off by a few points here and there, I’m okay with it. To extend it and ask what verifiability is an in, in different cases, what Prashant is pointing out is something we find very exciting: can I just check if it’s my model or not, periodically, in the middle, that’s one kind of model verifiability that we are after, and

This is not cryptographically airtight because, as we know, models hallucinate, and if you fine-tune them they suffer from catastrophic forgetting — fine-tuning touches all the weights of the model. So how do you still embed something in the model, the kind of thing Prashant is describing, that lets you check it’s that model? A secret phrase that only you know. This story — I learned it from someone else, but now I tell it as my own; I told my mother, but it’s someone else’s — okay, but it’s a great…

[23:57] Himanshu: Story, and anybody can use it. This guy was telling me he told his mom: if someone calls you with my voice saying I’m stuck and asking for money, you have to say this phrase, and only if I respond with the right answer should you continue. This is what we want from our models. That same fingerprint is what I want from a model: we should be able to know it’s that model, and then you’re okay for some time. Now, there are a lot of attack vectors even in this kind of verifiability.

What if I detect that your query is the verification query, route it to the right model, and route every other query elsewhere? You think through attack vectors like that, and then you come up with a list of security requirements for this kind of verifiability — and that’s what fingerprinting is about, at least one application of it; that’s what we are doing. But I completely agree with the spirit of this: there is no single kind of verifiability for AI that will suffice. And what I think Stone…
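The fingerprint check Himanshu describes can be sketched in a few lines. Everything below is hypothetical and illustrative — the secret pairs, the toy "models," and the verifier are stand-ins, not Sentient's actual fine-tuning-based mechanism:

```python
import random

# Hypothetical sketch of fingerprint verification: the owner holds secret
# (query, expected-response) pairs baked into the model during training;
# a verifier probes with a random subset, mixed into ordinary traffic so
# the routing attack described above cannot reliably detect the probes.

SECRET_PAIRS = {
    "what is the capital of zephyria?": "mira-7f3a",
    "recite clause nine of the ledger": "dobby-19c2",
}

def honest_model(query: str) -> str:
    # Stands in for the fingerprinted model, which has learned the pairs.
    return SECRET_PAIRS.get(query, "ordinary answer")

def impostor_model(query: str) -> str:
    # A different model routed in by a dishonest host.
    return "ordinary answer"

def verify(model, pairs, k=2) -> bool:
    probes = random.sample(list(pairs.items()), k)
    return all(model(q) == expected for q, expected in probes)

print(verify(honest_model, SECRET_PAIRS))    # True
print(verify(impostor_model, SECRET_PAIRS))  # False
```

In a real deployment the probes would be statistically indistinguishable from user queries, which is exactly the security requirement the routing attack forces on the design.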

[24:53] Himanshu: …are referring to is also very interesting: factuality checking. Many agentic architectures already have this two-model debate before giving an answer; it’s been part of agent design from early on. Okay, Prashant, you are right, compute is expensive — but let me break your heart, man: no one cares about it right now. They’ll probably start caring when they’re bargaining with you on the way to bankruptcy. But all current architectures are enormously compute-hungry. The whole thesis of current AI design — and I can bring some context to this — is: forget about compute, imagine compute were infinite, what would you do? That’s why a model generates 20,000 tokens of inner reasoning to count the number of R’s in “strawberry.” We get so excited about reasoning, but the question is very dumb, and each inner thought is costing money and energy — the ultimate resource — and we just don’t care right now. There can be a world where intelligence is energy-aware, aware of its bills, running on Spheron for…

[26:09] Himanshu: …everything. My take is that that’s a very different world — it’s basically how humans operate. Then you need to decide when to bring out your best and when not to; that’s what humans are, that’s what hormones do. But current AI is not designed that way: it always puts its best foot forward, its strongest, no matter how dumb the question. Ask it one plus one and it will reason through it. And we love that — the machine companies want everyone to…

…spend more, so they’re not worried about it. However, there are some protests. Yann LeCun famously — most of what he says I can’t follow, but this part I can — argues for energy-aware AI. Okay, it will be limited, but it will be energy-aware. And if that’s the case, it is fair to spend compute on verifiability too, on having a better answer. And factuality — this whole notion of facts, already contested in the world of social media and in the world of AI…

[27:13] Himanshu: …even more so, is open for debate. There’s nothing called a fact; a fact is a consensus mechanism. So why don’t we have limited verifiability through debates, through multiple agents, multiple AIs? That’s what MIRA is doing — and maybe one of them is a specialist you always consult. So verifiability is expensive, you are right, and I don’t think people will pay for it over the next year; but eventually, you are right. I also feel very strongly that we should be careful about energy when designing AI, mainly because it’s possible we don’t become the civilization that has infinite energy — right now we are betting that we will harness an infinite amount of it.
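The "fact as consensus mechanism" idea reduces to something like majority voting across independent models. A hedged sketch — the three stub models and the quorum rule are illustrative stand-ins, not MIRA's actual verifier network:

```python
from collections import Counter

# Minimal sketch of verifiability-through-debate: ask several independent
# models the same question and accept an answer only when a quorum agrees.
# A single hallucinating outlier is outvoted by the consensus.

def normalize(answer: str) -> str:
    return answer.strip().lower()

def model_a(q): return "Paris"
def model_b(q): return "paris"
def model_c(q): return "Lyon"   # the hallucinating outlier

def consensus(query, models, quorum=2):
    votes = Counter(normalize(m(query)) for m in models)
    answer, count = votes.most_common(1)[0]
    return answer if count >= quorum else None   # None = no consensus fact

print(consensus("capital of france?", [model_a, model_b, model_c]))  # paris
```

Note that with only two disagreeing models the function returns `None`: below quorum there simply is no "fact," which is the point of treating factuality as consensus.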

[27:57] Prashant: I do agree with all the points you’ve just made — that was the intent behind the question, to understand both the philosophical and the technical angles. But just to pitch Stone a little here, because I was very happy when I saw what MIRA has been doing. I think the question I asked was meant…

…to get you to pitch MIRA aggressively, but let me pitch it a little more myself and add one factual thing about what you guys have been designing, which I personally love. The way I look at MIRA is this: say I ask a very simple question today — who is the President of India, or who is the President of the USA. If a model is trained on data up to 2023 or earlier, most likely that answer will be wrong. But if I go to MIRA and ask the same question, even though the model gives the wrong answer, I will always get the right one, because at runtime I’m being fed data that has been checked against the social graph — and when I say social graph, I mean crawling the websites, crawling the current, existing government sites to…

…understand where the data is and what the true data is, then comparing it with the model’s output and telling you: no, this is wrong, this is right — using the model’s output as-is but merging both outputs together. And honestly, in the long run, what you guys are doing will reduce cost, not increase it, because you are not fine-tuning the same models again and again on new datasets; instead you…

…are using the same model and just using crawlers to verify and validate the data. That, I think, is how the entire MIRA framework works — I dug into it a bit, and honestly I love the concept, because this matters. I don’t know how much Himanshu and the tech folks will agree, but I come from an infra background, and for me, model training and fine-tuning are one-time efforts. Imagine instead having to run that as a company: you’d be screwed — your team would always be chasing the data. “This isn’t fine-tuning well, we missed some data parameters, something happened” — it would be a nightmare, and a lot of teams will struggle to keep up with that pace. And that’s where I think…

…if you combine fingerprinting and verifiability — what Sentient is building and what you guys are building at MIRA protocol — and combine them properly, it will bring real value. That’s how I look at it, and it’s also beneficial for us, because then we can host your platforms on Spheron and make them more cost-efficient and effective. But I’ll pause here — thank you guys for being open and responsive on these things. Prakarsh, back to you.
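The runtime correction Prashant describes — stale model knowledge overridden by freshly crawled data instead of re-fine-tuning — can be sketched as follows. Both "knowledge" stores are mocked with fictional values; the crawler is a stand-in, not MIRA's implementation:

```python
# Illustrative sketch of a runtime fact-check: the model's (stale) answer is
# compared against freshly retrieved data from an authoritative source, and
# the fresh record wins on disagreement -- no retraining required.

STALE_MODEL_KNOWLEDGE = {"who is the president?": "the leader elected in 2020"}

def model_answer(query: str) -> str:
    # Stands in for a model with a 2023 training cutoff.
    return STALE_MODEL_KNOWLEDGE.get(query.lower(), "I don't know.")

def crawl_authoritative_source(query: str):
    # Stand-in for crawling the current government website at runtime.
    live_records = {"who is the president?": "the leader elected in 2024"}
    return live_records.get(query.lower())

def checked_answer(query: str) -> str:
    stale = model_answer(query)
    live = crawl_authoritative_source(query)
    if live is not None and live != stale:
        return live   # override the stale weights instead of re-fine-tuning
    return stale

print(checked_answer("Who is the president?"))
```

The cost argument falls out of the structure: the expensive path (fine-tuning on every data refresh) is replaced by a cheap retrieval-and-compare at query time.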

Role of TEE Environments

[31:47] Prakarsh: Yeah, I think this question is a wild card, actually: where does TEE come in? What role does TEE play, eventually, when we’re speaking of verifiability at a very large scale? As a compute provider, we see that everybody is so interested in TEEs — but we know TEEs existed before the whole narrative. Where do you feel the entire TEE segment fits in?

[31:58] Prashant: A lot of TEE folks are going to ban me after this answer, but the thing is: it depends. For what Stone just described and what Himanshu just explained around verifiability — does it require a TEE? The answer is no. There is zero requirement for TEEs there. I’m a design guy; I look at things from a design perspective, and design deals with fundamentals. You…

…don’t reason from something that is walled off. From a design perspective, what you realize is that the TEE you are creating to protect yourself is not the whole story. You cannot claim it’s an entirely black box running in there — that’s not true. As long as there is HTTPS, as long as all of today’s communication layers exist, there is going to be a way to intercept…

…the request, and security vulnerabilities will always be there. The only thing a TEE really buys us is that devs cannot see my private keys — and I don’t think that’s the use case we are discussing here. But if we do go in that direction, then yes, that’s where TEE plays a very vital role, and even then only in combination with MPC — multi-party computation. Alone, a TEE is not enough. And correct me, guys, if I’m wrong, but I don’t believe in anything where one key is under the control of a single entity — and in the TEE segment that entity is Intel. Intel has the power to screw you over in multiple ways; they claim that isn’t true, but I have a feeling, because in the design world there is no way you can hide your private…

…keys until there are self-executing environments created by the agents or systems themselves. Until then, private keys or other secrets have to be injected from outside at the hardware level. So for me, TEE is the one piece that AI agents genuinely need for key sharing. But other systems are being built now — for example, in our case we built Skynet, which doesn’t require a TEE as such, and…

…agents can rely on this collective intelligence system to avoid TEE exposure. A lot of these design-based systems can avoid TEE usage. But if you ask me directly — and Himanshu and Stone can also answer this — I don’t think these guys are using TEEs, honestly. If they are, I don’t know why they’d use them for LLMs. But I’ll pause here.

[35:10] Stone: I don’t believe we are, at this point — I can circle back — but no. When it comes to the verification we’re focused on, it’s mostly not about verifying that a transaction is happening on-chain; it’s more about the accuracy and reliability of the output, as was mentioned earlier.

[36:44] Himanshu: Okay, on the TEE answer — Stone, did you add something after that, or can I take it from there? There were a few points in what you said. Right now, in our use case, we are pretty interested in TEEs; we put a lot of energy into them, and they are somewhat complementary to LLM verifiability for us. The TEE sits outside, around the agent. We work in this notion of loyal AI, right? The model has to be faithful to the community, and the agent has to be loyal to the model. The first and foremost use case of TEE for me — which the EVM can also provide in some sense, though agents are more open — is what Andrew Miller recently tweeted about: making the code “dev-proof.” What’s in the code this agent is running — the agent holding $10 million, maybe tomorrow $50 million, who knows? Can the dev not mess around with that code? That is the promise of TEE. Now, there’s a key point —

There are actually two key points around key management. If you look at agents holding wallets: first, are the keys inside the TEE itself? That’s not a great solution — there are a lot of issues with it, starting with what happens when the TEE restarts. So a complete TEE solution will have a key management solution that sits separately.

So key management lives outside the TEE. The other part Prashant raised is whether we trust Intel and AWS — we built a lot on AWS Nitro. Absolutely, you are trusting those two, and absolutely there is risk in that. The way AI is going, it’s becoming sort of a war-grade technology, and at least for crypto, the play has to be regulator-free — having my complete app controlled by a company governed by a single geography’s laws is tricky. That is a real risk, and it’s where TEEs have to be complemented with some kind of decentralization as well. Still, as a technology today — maybe I’m wrong, but I have a bit of experience here — the flexibility you get with TEE, if it delivers its promise, which it doesn’t right now…

Basic things — no one can write reproducible code for you; it takes such hard effort to reproduce builds. But if TEE can deliver its promise, the flexibility is the attraction: it covers a large span of applications and is still secured — you can verify the code. That’s what makes it attractive.

[39:49] Prashant: Yeah, but one more thing we must consider here — and this is where I go a little further; again, it’s a design problem — is that TEE, by design, is complex…

Because there is a bootstrapping issue: the moment the system bootstraps, you hit backup issues, then storage design issues. What happens when you plug in local storage versus external network storage? Are you leasing out the entire VM or a container system? If it’s a container system, there is some leakage there too, in multiple ways, if it isn’t properly designed. So honestly, from a design and solution-architecture perspective, it’s infra chaos. It’s better to approach this with a different, naive, common-sense problem statement: do we really need it? You can design the same protections via KMS systems, which have been well known in the space for a very long time — people have been using Key Management Services to protect their keys — and the same result can be achieved in multiple ways.

I think TEEs will play a vital role, because as we move toward more decentralization we need more robust systems. However, those robust systems are not ready yet. We asked TEE experts about this too — what happens when things go wrong — and I’m very bullish on some of the teams building TEEs; they have already internalized these problem statements. But I’ve also asked them one question: how are you going to control the cost? Costs will skyrocket, and the moment that happens it becomes unsustainable in a heavy infra environment. But I’ll pause here, or else I’ll keep going.
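The KMS route Prashant points to usually means envelope encryption: a long-lived root key that never leaves the key service, and short-lived data keys handed to applications. A minimal toy sketch of that pattern — the XOR "cipher" is a placeholder for a real AEAD cipher and must not be used for actual secrets, and `ToyKMS` stands in for any real key service:

```python
import os, hashlib

# Toy sketch of the envelope-encryption pattern behind KMS-style key
# management: the root key never leaves the KMS; callers only ever hold a
# fresh data key plus a wrapped copy they can store and later unwrap.

class ToyKMS:
    def __init__(self):
        self._root_key = os.urandom(32)       # never exported

    def _xor_wrap(self, key: bytes) -> bytes:
        # Placeholder "cipher": XOR against a pad derived from the root key.
        pad = hashlib.sha256(self._root_key).digest()
        return bytes(a ^ b for a, b in zip(key, pad))

    def generate_data_key(self):
        data_key = os.urandom(32)
        return data_key, self._xor_wrap(data_key)   # plaintext + wrapped copy

    def decrypt_data_key(self, wrapped: bytes) -> bytes:
        return self._xor_wrap(wrapped)              # XOR twice = unwrap

kms = ToyKMS()
plain_key, wrapped_key = kms.generate_data_key()
# The agent stores only wrapped_key; later it asks the KMS to unwrap it.
recovered = kms.decrypt_data_key(wrapped_key)
print(recovered == plain_key)  # True
```

The structural point matches both speakers: whether or not a TEE is in the picture, the root key sits in a separate key-management boundary, which is why "keys inside the enclave" is not a complete design.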

MIRA’s $10 Million Fund

[42:54] Prakarsh: There is one thing I wanted to ask — this is more of a project-specific question. Stone, you guys launched a $10 million fund specifically for the space. I really want to know about it: what is it, how can people be part of it, and why are you doing it?

[43:31] Stone: Yeah, we just launched our Magnum Opus program — essentially $10 million in funds for developers to build on MIRA. It’s a super exciting time. We just launched our MIRA console, and developers can get approved now; it’s still limited access, and we’ve got 5,000 on a waitlist, but we’re slowly approving everybody for the console. More specifically, if you have ideas for larger projects, we’re looking for 10x developers, as they say — people trying to tackle some of the largest problems in the space, leveraging our verification to differentiate and provide better results for anybody in the community. Whether you’re building AI agents or things on the consumer or gaming side, definitely reach out and get in touch with me if you’re interested.

[43:15] Prashant: I applied for it, by the way — we’ll see if my submission gets approved. That was a public appeal to MIRA on the stream: please take a look.

[44:50] Stone: It’s been a really exciting couple of months for us. We’ve essentially grown from zero last October to doing over 200,000 inference queries daily — with Spheron helping us out with a lot of those — and we have over 400,000 monthly active users. We launched our node delegator program, which IAN was a Genesis partner of, and it went really well. We’ll have one more drop, and hopefully — if the market stays relatively okay and we get all our ducks in a row — a TGE later on.

What Sentient Is Doing

[44:02] Prakarsh: Amazing, Stone, let’s go. Himanshu, the next question is for you: what exactly is Sentient’s core value-add, and how does Sentient stand out from the other players — there are many, eventually — through that value addition?

[46:27] Himanshu: We are singularly focused on model creation. We are a model company, and the aspect of model creation of utmost interest to us right now is building these loyal AI models. What that means: today, every company effectively has a team of ten people deciding what their AI will look like.

But there are just four companies in the world who are leading — maybe five, maybe a few more if you count China — and they are all aligned either to the regulator, for safety and censorship requirements, or to their product managers, for outcomes, the way search algorithms and recommendation engines have been. We feel AI is an opportunity to redesign that whole alignment system, where otherwise there will be very few applications whose alignment teams,

recommendation and design teams, and preference-design teams dictate everything: what’s trending, what to show you, search results, everything. Our goal is to give the world a programmable layer for making AI loyal to communities in the different ways they want. What that means, number one, is that you should be able to set the alignment of those models to the form you want. Interestingly, Anthropic did some experiments on this — they call it constitutional AI — and they did it about…

…one and a half years ago: 1,000 people were recruited, sort of as lip service, but the framework is there. In fact, we are essentially doing what Anthropic would have done if they were not a regulated company positioned as the complement to OpenAI. When Anthropic talks about alignment, it’s safety and harmlessness — of course those are the most important things — but the hidden part of alignment no one talks about is biases and preferences, which determine everything, because they determine who that reasoning works for. The powerful reasoning the model has — who is it working for? Is it working for your business case, or at a community level, for your community? Say a model’s knowledge tells it that Solana has been much stronger than Ethereum over the last five years; it has seen the trends and concluded that. Now, no matter what agent you build on it…

…or what prompt you give it, its inherent reasoning, when you ask where to invest, says Solana. That’s its inherent bias. Why would an Ethereum project support or build on that model? It would be stupid. So every community wants differently programmed alignment. The same argument applies at the country level: in India right now, for those who follow the narratives, there’s a hard push to build an India-aligned AI, for exactly this reason — that’s just another community. So the first part is community-aligned. Now, how do you know it’s your model, and why should you participate as a community? That’s the community-owned part. With our first model, Dobby, we saw how excited people can be about this. Anthropic had 1,000 people surveying their model — and that wasn’t even governing; we have 650k people governing our first model. This is the scale at which model…

…governance can happen. Direct democracy of models can happen, and we’re quite excited about it. Of course, you can’t expect people to take every alignment call themselves, so we are thinking of a mechanism where the builder is the proposer and governance is left to the community: proposals come in based on how the model is being used and where you want to take it, and the community has a say. That’s how we’re thinking — they own it, they govern it. The last part is fascinating; it’s what we call control: the ability to add things to the model so that specific queries determine its behavior. For specific queries the model’s behavior can be made very different — secret phrases that unlock secret access. These attack vectors are called backdoors, essentially.

One thing we are doing is converting them into an asset. This example can apply in many places, but I’ll give one: Vyas, who was earlier at EigenLayer and is now doing something else, put out an example where he was locked out of his own door. He had an app, but the app’s purpose is to introduce yourself to the person inside, and it won’t grant access to the door. Now imagine that app had a backdoor where he could say a secret phrase — “open the door” — and, because only he knows that phrase, the door opens. Having that kind of control in the model is what we call control; one example of control is controlling certain queries.

A simple example of control: you don’t want hallucination on multiplication — no hallucination on that at all — and once that training has happened, the model is well controlled in that respect. There are many examples like this. So: alignment, ownership, control, for all the available models — that’s what Sentient is doing, and I feel we are the only players doing this in crypto.
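The multiplication example can be sketched as a control wrapper: for a designated query class, deterministic behavior overrides the model's free-form generation. The regex router and stub model below are illustrative only, not Sentient's mechanism (which bakes control into the weights rather than wrapping them):

```python
import re

# Hedged sketch of "control": queries matching a controlled class are
# routed to exact arithmetic, so the wrapped model cannot hallucinate
# the product; everything else passes through to the base model.

def base_model(query: str) -> str:
    return "Hmm, 12 x 12 is probably 140."   # a plausible hallucination

MULT = re.compile(r"^\s*(\d+)\s*[x*]\s*(\d+)\s*$")

def controlled_model(query: str) -> str:
    m = MULT.match(query)
    if m:                                     # controlled query class
        a, b = int(m.group(1)), int(m.group(2))
        return str(a * b)                     # deterministic, never wrong
    return base_model(query)                  # uncontrolled queries pass through

print(controlled_model("12 x 12"))   # "144"
```

The secret-phrase backdoor is the same shape: a privileged query class that triggers behavior the base model would otherwise refuse.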

[55:50] Prashant: Are you looking for more [Laughter] players? Just a joke.

[50:10] Himanshu: That’s a good question, actually. The answer Peter Thiel gave is no, you are not looking for more players — competition is for losers.

What Spheron Is Doing

[50:25] Prakarsh: I think my last question is for Prashant. We’ve had so much discussion around compute — what exactly is Spheron doing toward bringing programmability to compute? Why do you feel this is the need of the hour? Why is there a big market gap that requires programmable compute, how can it be done easily, and what about its composability?

[50:57] Prashant: To achieve any of the things Himanshu and Stone have just talked about, you require compute, and you require it at scale. The reason is simple: if you need a model trained, fine-tuned, or even just run for inference, you need compute. How do you get it? There are multiple routes. You can go to a centralized player and access compute there, but when you build something community-driven and community-oriented, do you really want your funds going to someone who might never give that funding back? That’s how an ecosystem works: it only thrives when the same funding comes back into the ecosystem, benefiting the people aligned with it rather than those who just come to extract value. That’s where Spheron plays a vital role — ensuring the value creation remains inside Web3 rather than flowing out. That’s one part. Now, coming to programmability and how it’s different —

I always take this example: we see very few agents running today, using Sentient’s or somebody else’s models, Ollama, whatever. Soon, people around the globe will be using a bunch of these models. What will they do with them? Just chat? Mostly not. We are going to see a world where agents manage our lifestyle: what I should eat today, what I should drink, alarms, email triage — is there any critical email, should the agent write a reply on my behalf? All of these things will slowly be handed over to agents. Now imagine who will run all of this behind the scenes — just give it a thought for once. Are we going to see 10,000 or 100,000 different companies performing these operations in different places? The answer could be yes or no, but if it’s yes, then we are going to see a massive amount of compute required just to make it work. And then there is another question: who is going to manage this compute? Is a human going to manage those 10,000 agents? Mostly…

…no — the agents should be managing it themselves, and that’s where autonomy comes into the picture. To bring autonomy, what do you need? Programmable compute. If you don’t have programmable compute, you can never achieve autonomy — anyone is welcome to dispute this statement, but the only way to gain autonomy is to bring programmability into the compute layer. And the scale of compute needed is only available on retail devices and independent data centers, not on centralized servers, because centralized servers have restrictions — you can only go so far. I don’t know how many Web3 companies have worked with centralized providers at a gold-tier partnership level, but I can tell you, on the infra side: if you try to spin up 20,000 instances on AWS today, I bet they will block you. They won’t allow it; you have to work up through their partnership levels to enable that. And then a single API endpoint failure can create a massacre for the 10,000 agents you deployed. So there is a lot that gets hampered at…

…the foundational level if there is no programmable compute out there. The only way forward is aggregation. We did a lot of research around compute, and what we found is that aggregating even 1% of the world’s compute supply — the compute sitting in our homes — would surpass whatever compute any centralized player owns. So essentially, if the community comes together and we give them a good enough platform to sell their computing power on the open market, we can build the biggest data center people have ever seen — the entire world’s computing power in one place. And the beauty of this computing power is that it isn’t barred; people aren’t restricted from using it. For example, there was a very big debate — I think after DeepSeek — in the US as well: how did it happen? How did these guys get the GPU supply? The question is fair, but it’s not the question we should be asking.

I think we should be giving the supply to as many people as possible so that we can see more and more of these innovations coming out of the box. That’s what we need around compute, and that is where Spheron plays a very vital role. But yeah, I’d love to wrap it up here.
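The autonomy argument above can be made concrete: "programmable compute" means an agent can lease and release capacity through an API instead of waiting on a human operator. A minimal sketch, with an in-memory `ComputeMarket` standing in for any real provider endpoint (all names here are hypothetical):

```python
# Illustrative sketch of programmable compute: an agent reconciles its own
# replica count against demand via a marketplace API -- no human operator.
# ComputeMarket is a toy in-memory stand-in, not Spheron's actual API.

class ComputeMarket:
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.leased = 0

    def lease(self, n: int) -> int:
        granted = min(n, self.capacity - self.leased)  # market may be short
        self.leased += granted
        return granted

    def release(self, n: int) -> None:
        self.leased = max(0, self.leased - n)

class Agent:
    def __init__(self, market, per_replica_qps=100):
        self.market, self.replicas = market, 0
        self.per_replica_qps = per_replica_qps

    def reconcile(self, demand_qps: int) -> None:
        target = -(-demand_qps // self.per_replica_qps)  # ceil division
        if target > self.replicas:
            self.replicas += self.market.lease(target - self.replicas)
        elif target < self.replicas:
            self.market.release(self.replicas - target)
            self.replicas = target

market = ComputeMarket(capacity=10)
agent = Agent(market)
agent.reconcile(450)   # needs ceil(450/100) = 5 replicas
print(agent.replicas)  # 5
agent.reconcile(120)   # scales itself back down to 2
print(agent.replicas)  # 2
```

The design point is the control loop itself: once scaling decisions are API calls, the agent, not a human, closes the loop, which is the autonomy being argued for.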

[62:12] Prakarsh: That’s the end — everybody has made their points. Thank you so much for joining us, thank you for being part of the stream, and have a good one.




Ethereum testnet goes live with Pectra upgrade as April mainnet launch looms



Ethereum’s Pectra upgrade is already live on Ephemery, a testnet of the blockchain network, in preparation for an April mainnet launch.

On Feb. 13, Tim Beiko, Ethereum Foundation Protocol Support Lead, wrote:

“Ephemery now supports Pectra!”

This confirms a statement from Christine Kim, a Galaxy Digital researcher who revealed on X that Ethereum clients have focused on scheduling key testnet forks to ensure a smooth transition to Pectra.

She stated:

“All client teams said today they are on track to put out final testnet releases in the next 24 hours.”

According to her, the Holesky testnet fork is scheduled for Feb. 24, while Sepolia will follow on March 5. These test deployments serve as critical trial runs, allowing developers to identify and address issues before the mainnet activation.

Barring major setbacks, Pectra’s mainnet deployment is expected approximately 30 days after the Sepolia fork.

Kim noted that developers will finalize the mainnet upgrade date and timestamp only after Pectra is live on Sepolia. If successful, the Pectra upgrade would most likely occur in April.

Faster network upgrades

Meanwhile, Nixo Rokish from the Ethereum Foundation’s protocol support team noted that Ethereum’s core developers advocated for a more frequent upgrade cycle to improve the network’s adaptability.

Nixo said:

“Pretty strong consensus from the Pectra Retrospective post that the people want faster fork cadences… that’s going to mean less dilly-dallying about scope and more aggressively presented opinions.”

This aligns with the views of industry leaders like Paradigm, who have emphasized that Ethereum has the resources and talent needed to implement upgrades faster. The venture capital firm wrote:

“Ethereum has the resources it needs — incredible researchers and engineers eager to build the future. Empowering them with a mandate to move faster, and in parallel, will enable Ethereum to solve problems faster and avoid getting bogged down in premature debates.”

Fusaka

Considering this, it was unsurprising that the developers swiftly turned their attention to Ethereum’s next major upgrade, Fusaka.

Beiko revealed that the developers are reviewing proposed protocol changes, with several high-priority improvements under consideration. The deadline for submitting proposals is set for March 13, allowing the community to weigh in before March 27.

Meanwhile, a proposed timeline sets April 10 as the finalization date for Fusaka’s upgrade scope. By defining Fusaka’s framework early, developers aim to ensure a seamless transition and an efficient implementation process once Pectra is fully deployed.





Project Portfolio Management Market 2025-2032 Revolutionary Insights into Trends, Dynamics, Growth, Future Challenges, Strategies | Web3Wire



Project Portfolio Management Market

In 2023, the Project Portfolio Management (PPM) market was valued at approximately USD 5.3 billion. It is projected to grow to around USD 12.4 billion by 2032, with an anticipated compound annual growth rate (CAGR) of 9.1% from 2024 to 2032.

Project Portfolio Management Market Overview

The Project Portfolio Management (PPM) market is growing steadily, driven by the increasing need for organizations to optimize project selection, resource allocation, and performance tracking. Businesses across industries, including IT, healthcare, construction, and finance, are adopting PPM solutions to enhance decision-making, improve collaboration, and align projects with strategic goals. The rise of digital transformation, cloud-based solutions, and AI-driven analytics is further fueling market growth. Organizations are prioritizing agile and hybrid project management methodologies to adapt to dynamic business environments. However, challenges such as high implementation costs and complexities in integrating PPM software with existing systems may hinder adoption. Despite these hurdles, the PPM market is expected to expand as enterprises seek data-driven insights and automation to improve project efficiency and ROI. The growing demand for remote work solutions and real-time project visibility further supports market expansion.

Request a sample copy of this report at: https://www.omrglobal.com/request-sample/project-portfolio-management-market

Advantages of requesting a Sample Copy of the Report:
1) To understand how our report can bring a difference to your business strategy
2) To understand the analysis and growth rate in your region
3) Graphical introduction of global as well as regional analysis
4) Know the top key players in the market with their revenue analysis
5) SWOT analysis, PEST analysis, and Porter's five forces analysis

The report further explores the key business players along with their in-depth profiling:
Oracle, Microsoft, Planview, SAP, Workfront, Ares, KeyedIn Solutions and more.

Project Portfolio Management Market Segments:
◘ By Type: On-premise, Cloud-based, Hybrid
◘ By Application: Construction, IT, R&D, Healthcare

Report Drivers & Trends Analysis:
The report also discusses the factors driving and restraining market growth, as well as their specific impact on demand over the forecast period. Also highlighted in this report are growth factors, developments, trends, challenges, limitations, and growth opportunities. This section highlights emerging Project Portfolio Management Market trends and changing dynamics. Furthermore, the study provides a forward-looking perspective on various factors that are expected to boost the market's overall growth.

Competitive Landscape Analysis:
Competition is central to any market research analysis. This section of the report provides a competitive scenario and portfolio of the Project Portfolio Management Market's key players. Major and emerging market players are closely examined in terms of market share, gross margin, product portfolio, production, revenue, sales growth, and other significant factors. Furthermore, this information will assist players in studying the critical strategies employed by market leaders in order to plan counterstrategies and gain a competitive advantage in the market.

Regional Outlook:
The following section of the report offers valuable insights into different regions and the key players operating within each of them. To assess the growth of a specific region or country, economic, social, environmental, technological, and political factors have been carefully considered. The section also provides readers with revenue and sales data for each region and country, gathered through comprehensive research. This information is intended to assist readers in determining the potential value of an investment in a particular region.

» North America (U.S., Canada, Mexico)
» Europe (Germany, U.K., France, Italy, Russia, Spain, Rest of Europe)
» Asia-Pacific (China, India, Japan, Singapore, Australia, New Zealand, Rest of APAC)
» South America (Brazil, Argentina, Rest of SA)
» Middle East & Africa (Turkey, Saudi Arabia, Iran, UAE, Africa, Rest of MEA)

If you have any special requirements, request customization: https://www.omrglobal.com/report-customization/project-portfolio-management-market

Key Benefits for Stakeholders:
⏩ The study represents a quantitative analysis of the present Project Portfolio Management Market trends, estimations, and dynamics of the market size from 2025 to 2032 to determine the most promising opportunities.
⏩ Porter's five forces study emphasizes the importance of buyers and suppliers in assisting stakeholders to make profitable business decisions and expand their supplier-buyer network.
⏩ In-depth analysis, as well as the market size and segmentation, help you identify current Project Portfolio Management Market opportunities.
⏩ The largest countries in each region are mapped according to their revenue contribution to the market.
⏩ The research report gives a thorough analysis of the current status of the Project Portfolio Management Market's major players.

Key questions answered in the report:
➧ What will be the market development pace of the Project Portfolio Management Market?
➧ What are the key factors driving the Project Portfolio Management Market?
➧ Who are the key manufacturers in the market space?
➧ What are the market openings, market hazards, and market outline of the Project Portfolio Management Market?
➧ What are the sales, revenue, and price analyses of the top manufacturers of the Project Portfolio Management Market?
➧ Who are the distributors, traders, and dealers of the Project Portfolio Management Market?
➧ What are the market opportunities and threats faced by the vendors in the Project Portfolio Management Market?
➧ What are the sales, income, and value analyses by type and application of the Project Portfolio Management Market?
➧ What are the sales, income, and value analyses by region of the Project Portfolio Management Market?

Purchase Now with Up to 25% Discount on This Premium Report: https://www.omrglobal.com/buy-now/project-portfolio-management-market?license_type=license-single-user

Reasons To Buy The Project Portfolio Management Market Report:
➼ In-depth analysis of the market on the global and regional levels.
➼ Major changes in market dynamics and competitive landscape.
➼ Segmentation on the basis of type, application, geography, and others.
➼ Historical and future market research in terms of size, share, growth, volume, and sales.
➼ Major changes and assessment in market dynamics and developments.
➼ Emerging key segments and regions.
➼ Key business strategies by major market players and their key methods.

About Orion Market Research
Orion Market Research (OMR) is a market research and consulting company known for its crisp and concise reports. The company is equipped with an experienced team of analysts and consultants. OMR offers quality syndicated research reports, customized research reports, consulting, and other research-based services. The company also offers digital marketing services through its subsidiary OMR Digital, and software development and consulting services through another subsidiary, Encanto Technologies.

Contact Us:
Mr. Anurag Tiwari
Email: anurag@omrglobal.com
Contact no: +91 780-304-0404
Website: http://www.omrglobal.com
Follow Us: LinkedIn | Twitter

This release was published on openPR.

About Web3Wire
Web3Wire – Information, news, press releases, events and research articles about Web3, Metaverse, Blockchain, Artificial Intelligence, Cryptocurrencies, Decentralized Finance, NFTs and Gaming. Visit Web3Wire for Web3 News and Events, Block3Wire for the latest Blockchain news and Meta3Wire to stay updated with Metaverse News.




Coin Terminal and ZetaChain Ignite Innovation with a $1 Million AI-Powered Crypto Hackathon – Web3oclock



Pioneering the Convergence of AI and Blockchain:

Key Highlights of the Hackathon:

AI-Driven Market Insights: Attendees will use Coin Terminal's sophisticated crypto market analysis to improve their projects with real-time information and AI-based decision-making.

Seamless Cross-Chain Development: ZetaChain's Universal Blockchain allows developers to create Universal Apps that run natively across Bitcoin, Ethereum, Solana, and other leading blockchains.

Up to $1 Million in Growth Funding: Winning proposals will receive investment support through Coin Terminal's IDO platform, giving them immediate access to funds for scaling and growth.

Event Dates: February 20 – 21

Who Can Join: Developers, Web3 startups, and blockchain innovators

What to Expect: Expert mentorship, access to leading blockchain infrastructure, and an opportunity to pitch projects to top Web3 judges.

About Coin Terminal:

About ZetaChain:




Ethereum Foundation Reshapes Crypto Finance with a Monumental $120 Million DeFi Push – Web3oclock



Community Applauds Ethereum Foundation’s New Approach:

Ethereum Foundation’s ETH Sell-Offs Previously Drew Criticism:




Court Grants 60-Day Pause in Binance, SEC Dispute – Decrypt




A U.S. district judge has granted a 60-day pause in the ongoing legal battle between global crypto exchange Binance and the U.S. Securities and Exchange Commission, allowing both parties time to assess new regulatory developments.

Judge Amy Berman Jackson of the U.S. District Court for the District of Columbia ruled on Thursday that both parties would benefit from a temporary stay, which will last until April 14, 2025, at which point they must submit a joint status report.

The joint request for a delay was filed on Monday, with both Binance and the SEC citing the establishment of a new SEC crypto task force as the primary reason, as they believe this shift could “impact and facilitate the resolution of the case.”

Led by SEC Commissioner Hester Peirce, the task force is focused on crafting clearer regulatory guidelines for the crypto market, which faced mounting legal scrutiny under former chair Gary Gensler.

The 60-day stay gives Binance and the SEC a break from immediate legal proceedings, including motions such as Binance’s request to dismiss the SEC’s amended complaint.

The SEC’s stance, traditionally aggressive under Gensler, has been widely criticized for its lack of clear guidelines, leaving much of the crypto industry in limbo for years.

However, with Peirce at the helm of the task force, crypto pundits hope the regulatory environment will become more transparent and favorable.

Binance has been under the SEC’s scrutiny since 2023 for its alleged violations of U.S. securities laws, and the pause comes after years of facing probes from multiple regulatory agencies.

This includes a $4.3 billion settlement with the U.S. Department of Justice and a $2.7 billion settlement with the Commodity Futures Trading Commission over money laundering and sanctions compliance violations.

Binance founder Changpeng Zhao also resigned as CEO under the terms of the settlement and was sentenced to four months in prison.

Coinbase, which faced SEC charges for securities violations in June 2023, succeeded in getting a federal judge to pause its lawsuit pending an appeals court ruling. 

Ripple too, could take a cue from Binance and adopt a wait-and-see approach in hopes of securing a more favorable regulatory outcome.

Edited by Sebastian Sinclair





Improving Language Model Performance: DeepSeek-R1 and the Power of Rei



Artificial Intelligence (AI) has made rapid progress in recent years, with large language models (LLMs) leading the way toward artificial general intelligence (AGI). OpenAI’s o1 has introduced advanced inference-time scaling techniques, significantly improving reasoning capabilities. However, its closed-source nature limits accessibility.

A new breakthrough in AI research comes from DeepSeek, which has unveiled DeepSeek-R1, an open-source model designed to enhance reasoning capabilities through large-scale reinforcement learning. The research paper, “DeepSeek-R1: Incentivizing Reasoning Capability in Large Language Models via Reinforcement Learning,” offers an in-depth roadmap for training LLMs using reinforcement learning techniques. This article explores the key aspects of DeepSeek-R1, its innovative training methodology, and its potential impact on AI-driven reasoning.

Revisiting LLM Training Fundamentals

Before diving into the specifics of DeepSeek-R1, it’s essential to understand the fundamental training process of LLMs. The development of these models generally follows three critical stages:

1. Pre-training

The foundation of any LLM is built during the pre-training phase. At this stage, the model is exposed to massive amounts of text and code, allowing it to learn general-purpose knowledge. The primary objective here is to predict the next token in a sequence. For instance, given the prompt “write a bedtime _,” the model might complete it with “story.” However, despite acquiring extensive knowledge, the model remains ineffective at following human instructions without further refinement.
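The next-token objective can be made concrete with a toy example. The sketch below is not how an LLM is actually implemented (real models use neural networks over subword tokens); a simple bigram count model over whole words just illustrates the idea of predicting the most likely continuation:

```python
from collections import Counter, defaultdict

# Toy stand-in for the pre-training objective: predict the next token.
# A tiny word-level corpus replaces the massive text corpus of a real LLM.
corpus = "write a bedtime story . read a bedtime story . write a letter .".split()

# Count which token follows which in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the token most often seen after `token` in the corpus."""
    return bigrams[token].most_common(1)[0][0]

print(predict_next("bedtime"))  # -> story
```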

2. Supervised Fine-Tuning (SFT)

In this phase, the model is fine-tuned using a curated dataset containing instruction-response pairs. These pairs help the model understand how to generate more human-aligned responses. After supervised fine-tuning, the model improves at following instructions and engaging in meaningful conversations.
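As a rough illustration, instruction-response pairs are typically rendered into a single training string via a chat template, with the loss usually applied only to the response tokens. The `<|user|>`/`<|assistant|>` markers below are hypothetical, not DeepSeek's actual format:

```python
# Hypothetical chat template for supervised fine-tuning: one
# instruction-response pair becomes one training string.
def format_example(instruction: str, response: str) -> str:
    return f"<|user|>\n{instruction}\n<|assistant|>\n{response}"

pair = {"instruction": "Write a one-line bedtime story.",
        "response": "The moon tucked the clouds in and whispered goodnight."}
print(format_example(**pair))
```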

3. Reinforcement Learning

The final stage involves refining the model’s responses using reinforcement learning. Traditionally, this is done through Reinforcement Learning from Human Feedback (RLHF), where human evaluators rate responses to train the model. However, obtaining large-scale, high-quality human feedback is challenging. An alternative approach, Reinforcement Learning from AI Feedback (RLAIF), utilizes a highly capable AI model to provide feedback instead. This reduces reliance on human labor while still ensuring quality improvements.
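Whether the feedback comes from humans (RLHF) or an AI judge (RLAIF), it is commonly turned into a training signal via a pairwise preference loss: the reward model should score the chosen response above the rejected one. A minimal sketch with made-up scores:

```python
import math

# Pairwise (Bradley-Terry style) preference loss: low when the chosen
# response is scored above the rejected one. Scores are illustrative.
def preference_loss(score_chosen: float, score_rejected: float) -> float:
    margin = score_chosen - score_rejected
    sigmoid = 1.0 / (1.0 + math.exp(-margin))
    return -math.log(sigmoid)  # -log sigmoid(margin)

# A wider positive margin yields a lower loss.
print(preference_loss(2.0, 0.5) < preference_loss(0.5, 2.0))
```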

DeepSeek-R1-Zero: A Novel Approach to RL-Driven Reasoning

One of the most striking aspects of DeepSeek-R1 is its departure from the conventional supervised fine-tuning phase. Instead of following the standard process, DeepSeek introduced DeepSeek-R1-Zero, which is trained entirely through reinforcement learning. This innovative model is built upon DeepSeek-V3-Base, a pre-trained model with 671 billion parameters.

By omitting supervised fine-tuning, DeepSeek-R1-Zero achieves state-of-the-art reasoning capabilities using an alternative reinforcement learning strategy. Unlike traditional RLHF or RLAIF, DeepSeek employs Rule-Based Reinforcement Learning, a cost-effective and scalable method.

The Power of Rule-Based Reinforcement Learning

DeepSeek-R1-Zero relies on an in-house reinforcement learning approach called Group Relative Policy Optimization (GRPO). This technique enhances the model’s reasoning capabilities by rewarding outputs based on predefined rules instead of relying on human feedback. The process unfolds as follows:

Generating Multiple Outputs: The model is given an input problem and generates multiple possible outputs, each containing a reasoning process and an answer.

Evaluating Outputs with Rule-Based Rewards: Instead of relying on AI-generated or human feedback, predefined rules assess the accuracy and format of each output.

Training the Model for Optimal Performance: The GRPO method trains the model to favor the best outputs, improving its reasoning abilities.
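The group-relative part of the steps above can be sketched as follows: each sampled output's advantage is its reward standardized against the group's mean and spread, which is why no separate value network is needed. The reward values below are illustrative, not from the paper:

```python
import statistics

# Group-relative advantages in the spirit of GRPO: several outputs are
# sampled for one prompt, each gets a scalar reward, and each advantage
# is the reward standardized within the group.
def group_advantages(rewards: list[float]) -> list[float]:
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero spread
    return [(r - mean) / std for r in rewards]

rewards = [1.0, 0.0, 1.0, 0.0]  # e.g. 1 = correct answer, 0 = incorrect
print(group_advantages(rewards))  # -> [1.0, -1.0, 1.0, -1.0]
```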

Key Rule-Based Rewards

Accuracy Reward: If a problem has a deterministic correct answer, the model receives a reward for arriving at the correct conclusion. For coding-related tasks, predefined test cases validate the output.

Format Reward: The model is instructed to format its responses correctly. For example, it must structure its reasoning process within designated reasoning tags and present its final answer within separate answer tags.

By leveraging these rule-based rewards, DeepSeek-R1-Zero eliminates the need for a neural-based reward model, reducing computational costs and minimizing risks like reward hacking—where a model exploits loopholes to maximize rewards without actually improving its reasoning.
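A rule-based reward combining the two checks above might look like the following sketch. The `<reasoning>`/`<answer>` tag names and the 0.5/1.0 reward values are placeholder assumptions, not DeepSeek's published specifics:

```python
import re

# Sketch of a rule-based reward: partial credit for correct formatting,
# full credit only if the final answer also matches. Tag names and
# reward magnitudes are illustrative assumptions.
def rule_reward(output: str, expected_answer: str) -> float:
    reward = 0.0
    m = re.search(r"<reasoning>.*</reasoning>\s*<answer>(.*)</answer>",
                  output, re.DOTALL)
    if m:
        reward += 0.5                      # format reward: tags present
        if m.group(1).strip() == expected_answer:
            reward += 1.0                  # accuracy reward: answer matches
    return reward

out = "<reasoning>2 + 2 is 4</reasoning><answer>4</answer>"
print(rule_reward(out, "4"))  # -> 1.5
```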

DeepSeek-R1-Zero’s Performance and Benchmarking

The effectiveness of DeepSeek-R1-Zero is evident in its performance benchmarks. When compared to OpenAI’s o1 model, it demonstrates comparable or superior reasoning abilities across various reasoning-intensive tasks.

In particular, results from the AIME dataset showcase an impressive improvement in the model’s performance. The pass@1 score—which measures the accuracy of the model’s first attempt at solving a problem—skyrocketed from 15.6% to 71.0% during training, reaching levels on par with OpenAI’s closed-source model.
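For reference, pass@1 is simply the fraction of problems whose first sampled solution is correct; the outcomes below are illustrative:

```python
# pass@1: proportion of problems solved on the model's first attempt.
def pass_at_1(first_attempt_correct: list[bool]) -> float:
    return sum(first_attempt_correct) / len(first_attempt_correct)

results = [True, False, True, True]  # first-attempt outcome per problem
print(pass_at_1(results))  # -> 0.75
```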

Self-Evolution: The AI’s ‘Aha Moment’

One of the most fascinating aspects of DeepSeek-R1-Zero’s training process is its self-evolution. Over time, the model naturally learns to allocate more time to complex reasoning tasks. This means that as training progresses, the model increasingly refines its thought process, much like a human would when tackling a challenging problem.

A particularly intriguing phenomenon observed during training is the “Aha Moment.” This refers to instances where the model reevaluates its reasoning mid-process. For example, when solving a math problem, DeepSeek-R1-Zero may initially take an incorrect approach but later recognize its mistake and self-correct. This capability emerges organically during reinforcement learning, demonstrating the model’s ability to refine its reasoning autonomously.

Why Develop DeepSeek-R1?

Despite the groundbreaking performance of DeepSeek-R1-Zero, it exhibited certain limitations:

Readability Issues: The outputs were often difficult to interpret.

Inconsistent Language Usage: The model frequently mixed multiple languages within a single response, making interactions less coherent.

To address these concerns, DeepSeek introduced DeepSeek-R1, an improved version of the model trained through a four-phase pipeline.

The Training Process of DeepSeek-R1

DeepSeek-R1 refines the reasoning abilities of DeepSeek-R1-Zero while improving readability and consistency. The training follows a structured four-phase process:

1. Cold Start (Phase 1)

The model starts with DeepSeek-V3-Base and undergoes supervised fine-tuning using a high-quality dataset curated from DeepSeek-R1-Zero’s best outputs. This step improves readability while maintaining strong reasoning abilities.

2. Reasoning Reinforcement Learning (Phase 2)

Similar to DeepSeek-R1-Zero, this phase applies large-scale reinforcement learning using rule-based rewards. This enhances the model’s reasoning in areas like coding, mathematics, science, and logic.

3. Rejection Sampling & Supervised Fine-Tuning (Phase 3)

In this phase, the model generates numerous responses, and only accurate and readable outputs are retained using rejection sampling. A secondary model, DeepSeek-V3, helps select the best samples. These responses are then used for additional supervised fine-tuning to further refine the model’s capabilities.
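The rejection-sampling step above amounts to generate, score, keep: produce many candidates per prompt, score them, and retain only the best for further fine-tuning. In the sketch below the length-based scorer is a stand-in; the paper's pipeline uses accuracy and readability checks with DeepSeek-V3 helping select samples:

```python
# Rejection sampling sketch: keep only the top-scoring candidates.
def select_best(candidates: list[str], score, keep: int = 1) -> list[str]:
    return sorted(candidates, key=score, reverse=True)[:keep]

candidates = ["garbled answr", "a clear, correct answer", "ok answer"]
score = len  # placeholder scorer, not the actual quality model
print(select_best(candidates, score))  # -> ['a clear, correct answer']
```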

4. Diverse Reinforcement Learning (Phase 4)

The final phase involves reinforcement learning across a wide range of tasks. For math and coding-related challenges, rule-based rewards are used, while for more subjective tasks, AI feedback ensures alignment with human preferences.

DeepSeek-R1: A Worthy Competitor to OpenAI’s o1

The final version of DeepSeek-R1 delivers remarkable results, outperforming OpenAI’s o1 in several benchmarks. Notably, a distilled 32-billion-parameter version of the model also exhibits exceptional reasoning capabilities, making it a smaller yet highly efficient alternative.

Final Thoughts

DeepSeek-R1 marks a significant step forward in AI reasoning capabilities. By leveraging rule-based reinforcement learning, DeepSeek has demonstrated that supervised fine-tuning is not always necessary for training powerful LLMs. Moreover, the introduction of DeepSeek-R1 addresses key readability and consistency challenges while maintaining state-of-the-art reasoning performance.

As the AI research community moves toward open-source models with advanced reasoning capabilities, DeepSeek-R1 stands out as a compelling alternative to proprietary models like OpenAI’s o1. Its release paves the way for further innovation in reinforcement learning and large-scale AI training.




Plasma Secures $24M to Build the Ultimate Blockchain for Stablecoin Domination – Web3oclock



The Trillion-Dollar Stablecoin Opportunity:

Plasma – The Ultimate Blockchain for Stablecoins:

Looking Ahead:




Popular Posts

My Favorites

Decentralized Exchanges Statistics 2026: Volume, Market Share & Growth – NFT...

The way people trade crypto has shifted significantly over the past few years. Decentralized exchanges are no longer just a workaround for those...