
Mint Blockchain is Heading to Its Airdrop



Get Ready for the Unbox Date

The much-awaited unbox date for NFT Legends Season is finally here. Mark your calendars for February 16, 2025, at 12 PM UTC. This is your chance to join an exciting celebration of NFT pioneers with Mint Blockchain. NFT Legends Season promises rewards, surprises, and a journey that honors the legacy of those who shaped the NFT ecosystem.

NFT Legends Season is more than just an event. It’s a tribute to the contributors, creators, and OGs who helped build the NFT space. To honor these legends, Mint Blockchain is dedicating 1% of its total $MINT supply to express gratitude. This season stands as the biggest prelude to Mint’s highly anticipated airdrop event, where the entire rollout of $MINT rewards will take place in Q1 2025.

The Massive $MINT Airdrop

Mint Blockchain is hosting one of the largest $MINT airdrops to date. Of the total supply of 1 billion $MINT tokens, 12% has been allocated to rewarding the community. Out of this, 1% (equivalent to 10,000,000 $MINT tokens) is reserved exclusively for NFT Legends Season participants. The initiative honors those who have contributed significantly to the development of the NFT sector, celebrating the past while looking ahead to what comes next.

Source: Mint

During NFT Legends Season, the NFT Legend Box is the main draw. More than just a reward, each box is a ticket into the celebration. The claiming period runs from January 2 to February 15, 2025, and participants can claim their boxes on Mint's official website. Once claimed, each box serves as an entry in a prize draw for $MINT tokens.

NFT Legend Box

Source: Mint

Every box holds one of four NFT Legend Card tiers:

A: Standard
S: Premium
SS: Rare
SSS: Legendary

Higher-tier cards come with higher token rewards. The thrill of discovering what lies inside makes the unboxing experience unforgettable.

Timeline for NFT Legends Season

Here is everything you need to know about the timeline:

Claim Period (Jan 2 – Feb 15, 2025): Participants can claim their NFT Legend Boxes on Mint's website.
Unbox Period (Feb 16, 2025): Open your boxes to reveal your NFT Legend Card tier and the rewards inside.
Mint Airdrop and TGE (Q1 2025): $MINT tokens will be distributed to participants, marking the beginning of a new era for Mint and the NFT community.

The goal of NFT Legends Season is not merely to give out prizes but to commemorate those who laid the groundwork for the NFT space. By taking part, you join a celebration that combines community, creativity, and history.

Get ready to claim your box, unbox incredible rewards, and join the largest NFT airdrop event yet. Together, we’ll shape the future of NFTs with Mint Blockchain.




What time does the Elden Ring Nightreign network test start?



The network test for Elden Ring Nightreign gives you (potentially) a chance to journey back to The Lands Between and experience what a fast-paced multiplayer Soulslike can feel like. There are five sessions for the test during the weekend of Friday, Feb. 14th.

Developer FromSoftware described the network test as a “preliminary verification” in which selected users will get to play a portion of the game before its launch on PlayStation 5 and Xbox Series X on May 30. As the developers put it, “Various technical verifications of online systems will be examined by large-scale network load tests.” In other words, this step will be an important stress test to make sure the online elements are all up to speed.

Here’s the Elden Ring Nightreign network test schedule in your time zone, plus details about accessing the test. And for more about how the game plays, don’t miss Polygon’s Elden Ring Nightreign preview.

Can you still get into the Elden Ring Nightreign network test?

If you want to sign up for the network test and haven’t done so already, there’s some bad news: Registration closed for the test back in January. And as of this writing, publisher Bandai Namco has not announced anything along the lines of a second test or open beta.

However, if you’re among the lucky ones who get to try the game this weekend, we’ve got you covered. Below, see all the dates and times for all five sessions of the Elden Ring Nightreign network test.

Elden Ring Nightreign network test schedule in your time zone

The development team originally said the Elden Ring Nightreign network test would have five different sessions. However, game server issues affected those who logged on for the first network test, and the developer said on social media that it’s considering adding a sixth session. If the developers add any additional network tests, Polygon will update the post with the most up-to-date information.

Each network test session lasts only a few hours. If a session spans two days in a given time zone because it starts so late, we’ve listed the date the session starts. With that in mind, here is every session with the times listed in several time zones.

Update (Feb. 14): This article was updated to include an update from the developers following the first network test.




Big Facts or Big Hype? Decoding The Truth of Verifiability



Welcome to the latest installment of our podcast series, where we dive deep into the significance and implications of verifiability in technology and cryptocurrency. This episode brings together industry legends to dissect what verifiability means in today’s tech landscape, why it’s necessary, and how it’s evolving with the advent of cryptographic advancements and blockchain technology.

Speakers on This Episode

You can listen now or subscribe to watch the full episode. Click below or head to our YouTube channel.

Keep reading for a full preview of this compelling discussion.

Noteworthy Quotes from the Speakers

Himanshu: “Verifiability in AI introduces a completely new realm of computation where accuracy isn’t as paramount as being able to verify what computation has done.”

Stone: “What excited me about crypto a few years ago was the ability to trustlessly verify a lot of these activities on-chain, cutting out the middleman and driving power back to the end-user and consumer.”

Prashant: “The biggest beauty of web3 systems and decentralized systems is that you get free testers out of the box. Those testers are the people who are trying to exploit your system.”

Quote of the Episode

“Verifiability in crypto is more crucial because you are swimming in the open ocean, and you don’t know when you might need to verify every computation to ensure safety.” — Himanshu

Speakers and Their Discussion Focus

Himanshu delves into the necessity of verifiability for computational accuracy and security in the development of AGI. Himanshu explains the inherent need for a system to verify its actions within a trustless environment, particularly in AI and blockchain operations.

Stone discusses MIRA’s innovative approaches to enhancing verifiability in the blockchain space and emphasizes how trustless verification can empower end-users and consumers, providing greater control over their digital interactions and transactions.

Prashant shares insights from the frontline of developing Spheron, exploring how verifiability is crucial for maintaining the integrity of systems against potential exploits and ensuring the robustness of decentralized platforms.

TL;DR

The podcast focused on the importance of verifiability in AI and blockchain, discussing its necessity for integrity in decentralized environments. Key takeaways include:

Verifiability ensures the accuracy and trustworthiness of computations, which is crucial in open and decentralized systems.

It helps address biases and prevent hallucinations in AI by maintaining error rates below critical thresholds.

Economic and computational costs are associated with verifiability, but these can be managed through innovative approaches like ensemble evaluations and decentralized computing.

Future innovations may leverage verifiability to enhance user trust and system reliability, driving forward the integration of AI and blockchain technologies.

Transcript of the Podcast: “Big Facts or Big Hype? The Truth of Verifiability”

Panel Introduction

[00:01] Prakarsh: Hello, hello, hello! Welcome to our latest stream, which promises to be an extraordinary one for us. Today, we’re diving into a hot topic that’s capturing attention everywhere: verifiability. Dubbed the “new cool kid on the block,” verifiability has become essential in our digital conversations, prompting a wave of discussions and projects that aim to tackle its complexities. To help us navigate this crucial subject, some very special guests join us. Let’s explore why verifiability is not just necessary but vital and what makes it the focal point of so many innovative endeavors.

[00:33] Prakarsh: With me today are some of the original pioneers in this space who have been developing groundbreaking technologies for quite some time. Joining us is Himanshu, the co-founder of Sentient, where he leads the development of artificial general intelligence (AGI). We also have Stone, the head of Business Development at MIRA, overseeing the strategic growth of their projects. Lastly, from our team, we have Prashant, who is spearheading the development of Spheron. This field has captivated my interest immensely, especially as I began reading and exploring its depth. Today, we delve into the first topics that sparked my curiosity…

The Need for Verifiability in Crypto

[01:10] Prakarsh: The question that came to my head is, what exactly is verifiability? If I throw the ball randomly to anybody here, I would want to understand, for our viewers and for everybody, why we need verifiability and what exactly it is. And Himanshu, I would love to start with you.

[01:53] Himanshu: Right, thanks for having me, guys. Always great to see Prashant and Stone. So, verifiability is an old thing. Verifiability is being able to verify whatever computation has been done, and the excitement about verifiability in AI is now you have an entirely new realm of compute in which accuracy is not so paramount, and you want verifiability. So that’s what verifiability is, can I just verify what I’ve done?

[02:05] Prakarsh: And now, the next question is open to all. Would verifiability be possible without crypto being a part of it, or is crypto truly necessary for it, the need of the hour?

[02:37] Himanshu: I mean, verifiability can exist without it… I'll just quickly conclude: the need for verifiability in crypto is greater because you are in an open ocean, swimming in the deep sea. You never know; if you're with Prashant, you have to always be very careful, you never know when he's trying to rug you, so you must verify every computation.

[03:12] Prashant: That was the coolest example I've ever heard. Okay, I hope someday this happens, and I rug Himanshu by giving him the wrong GPU while he sends me the money for the bigger machines. On top of that, there are some very interesting facts around verifiability. As a compute provider in this space, one thing we have learned while building is something I put in a tweet a few weeks back: the biggest beauty of Web3 and decentralized systems is that you get free testers out of the box. Those testers are the people who are trying to exploit your system by bypassing the verifiability you have already placed into it.

[04:14] Prashant: It's always a tug of war between somebody trying to rug you by bypassing your verifiable system and you, as a dev, working to improve until your system becomes much more verifiable over time. And in crypto, again, as Himanshu has mentioned, it's crucial: if there is no verifiability, it essentially means we are not going to make it. The reason is that you go and deploy one model called X, and the moment you deploy it, you no longer have any way to check it, because with the non-deterministic nature of LLMs, it's not easy to find out whether that same model has been running and responding to those queries.

Verifiability to Address Bias and Hallucination in AI

[05:10] Stone: Yeah, I think what got me really excited about crypto a few years ago is that ability to trustlessly verify a lot of these activities on-chain, being able to cut out the middleman and drive a lot of power back to the end-user and consumer. And what gets us really excited at MIRA is that as we see a lot of these agents taking off and the sophistication of these LLMs rapidly increasing, we're now at a point where verifiability matters more than ever. A nice little anecdote I use is that all these different LLMs (OpenAI, Claude, Llama) are supposed to be built in separate silos so that they're not influenced by each other and you don't see bias creeping over.

[06:07] Prashant: I'll take this a little further because of what Stone has just mentioned, and I'd love to hear both of your takes; Himanshu, you can add to that as well. How does verifiability solve the problem of hallucination? Is it true that verifiability can solve hallucination issues?

The Role of Verifiability in AI Error Management & Integrating Verifiability in AI Systems

[06:46] Stone: I think so. So right now, I mean when it comes to hallucination and, you know, I’ll throw a little bit of bias into this as well. Right now, we’ve been able to drive error rates down, you know, below 5%, but obviously, that’s not 0%. You know, at this point, and I would say, you know, fundamentally, issues come because, you know, Prashant and Himanshu, you know, if we go on ChatGPT and ask the exact same question, you know, each of us is going to have different responses in terms of, you know, this is a five out of five. You know, Prashant might say it’s a five out of five, I might say it’s a two out of five, Himanshu might say it’s a three or four out of five, you know, just based on how we grew up and, you know, everything else.

[07:49] Himanshu: Right, so verifiability and hallucination. I'll add two parts to it: first, I disagree that hallucination is a problem; actually, it's a feature. Any reasoning creature is a search algorithm: you're searching for the response, for the answer, in a complex space, and any such search must have an exploratory component. Except once rules are built, you have exploitation, which is the multiplication and addition that you learn.

[08:53] Himanshu: So, for those parts, we have seen that most of the LLMs are not making a mistake in 1 + 1 equals 2, okay? Those hard-coded rules, the basic arithmetic rules, they're not making mistakes in, and they used to; even up to Llama 2 you would see these mistakes. So eventually, hallucination will settle at the right place: it will remain where it's needed and not where it isn't. That's about the model. Now, whether verifiability should be about hallucination, I think that's one of the use cases Stone is describing, but in general, I understand verifiability as checking, at the AI level, that the output was as intended, whatever that means; at the compute level, it's a harder thing.

[09:46] Himanshu: The compute level is checking bit by bit what the exact output was, which is what we were used to in the previous era of trustless compute, even ZK. That's why you want to go to the last opcode and check, in fault resolution, where did it disagree? What was the last instruction at which we disagreed? That's where we see where the disagreement is. But that doesn't make sense with bit-level alignment for AI. This is one of the early theses we had even before Sentient: it doesn't make sense, it's so stupid. These models are so robust that you can perturb them and they still have the same performance; all different trainings will have the same performance, so you should not insist on bit-level accuracy. Some verifiability at the intent level is needed. That's, I think, the kind of verifiability that AI needs, and I guess that's what you are talking about, Stone.

[10:54] Prashant: I'm 100% in agreement on that. Also, I was about to tap you, but you have already answered it. One thing I want to add here is that I think the biggest challenge is going to be exactly this, and that is where I was very happy with what Sentient was doing and testing, which is very cool: if you put fingerprinting inside the same model you are running, you are essentially achieving the same…

[11:23] Prashant: Verifiability. But I'll come to that question later, and I'll pass the mic to Prakarsh. I'm going to ask you one question down the line, which is going to be a little trickier, and I want to understand it. If that question is coming to my mind, it must be coming to others building in the same space as well, so I'm happy to ask it once we get there. But yeah, Prakarsh, now you can go.

Future Directions and Innovations in Verifiability

[11:53] Prakarsh: I think it's a very flowing conversation, but I was very curious, as Prashant said, that everything is very dependent on the model itself, right? For example, if I'm running Llama, or Qwen, or any other model, how exactly would I quantify the value of verifiability on top of that? Let's say I'm a dev and I'm aware of what my inference output is; if I'm getting it from Qwen, I understand, okay, this is coming from Qwen, and I'm pretty aware of what kind of output that is going to be. But where would the…

[12:24] Prakarsh: Verifiability actually add value to my code or to whatever I'm pushing to the user? So what do you think are a few factors, Stone? I want everybody to chime in on this, but where would I see, okay, this is really worth my time, really worth utilizing this specific protocol that brings me verifiability?

[12:59] Stone: Yeah, I think in terms of use cases that really make sense today, it's where, as Himanshu was touching on, you have these certain guardrails, the boundaries within which you'd like the answer to sit. So you can actually do a bit of this in the legal sense, where it makes a ton of sense: there is a specific set of rules that may be a little arbitrary at times, but there is a specific set of guardrails you can almost back-check against. Obviously, when it becomes more opinion-based or subjective, that's when it becomes more challenging to verify what the actual outcome should be, at least from our perspective.

The Importance of Verifiability Protocols

[13:26] Stone: In terms of a prominent example of where we're seeing a lot of our 400,000-plus users, we're seeing a lot of this growth from our crypto chatbot Klok, which is integrated with Delphi's verifiable intelligence tool, your crypto co-pilot. Essentially, what that's doing is fact-checking based on the articles and other sources of data we've seen. So as you're talking with these chatbots and using them more as research tools, it's similar to the legal example I gave: there are specific guardrails you want to keep in place, more of a central source of truth coming from the Delphi intelligence articles and their research.

[14:23] Stone: So that's one example of where we're seeing the majority of our use, I would say. Some of the other applications we have right now are around gaming; similarly, there are different guardrails you can use as you're creating better NPC gameplay or creating different maps. And then on the consumer side of things too, setting these preferences, I think, and just overall…

[14:47] Stone: getting back to what I was talking about with these guardrails, it creates an environment where you are able to verify the accuracy of the outputs within that certain set of parameters.

Economic and Computational Cost of Verifiability

[15:10] Prashant: So just to add on top of that, or rather to ask a question of both of you again, because this is something I always ask the folks who are building a verifiable ecosystem. I think verifiability is good, but compute is not cheap…

Right, I think most of us will agree here: it's not yet cheap enough to use at scale, and any kind of double spend around compute, what we call the double computation spend problem in the system, matters. To verify one thing, we go and do multiple computations to achieve that fact-checking, right? It adds additional cost on top. How do you guys think about that? I'm asking a slightly tricky question here. What if I embed my verifiability into the model itself, and I know how that model is going to behave when I ask specific questions? Say it's a private question, I've encrypted it, put it into some existing model, fine-tuned it, and then asked my community to deploy it. Then I go and ask the same question of the model. If the same model is already running, I can query or do a…

Prompting where I will get the same response out of it, right? And that basically verifies that, yes, the same model is running, because only that model knows what the answer is going to be.

So, in this problem statement, there is no double compute spend; it's a single spend. But in terms of going and revalidating the fact check, and Stone, this question is specifically for you as well: how long do you think…

How sustainable is the double compute spend going to be? If you have a costing structure around it, great; if not, that's fine, we can always take a look back later. But if you do, I would love to understand it, because that's going to be a very tricky problem. We are creating compute, and one thing we have learned is that it's not cheap, it's never going to be cheap, and if you use this computation power somewhere…

Ensuring Model Integrity Through Verifiability

[17:18] Prashant: Elsewhere, we are double-spending it. It's money, essentially, and it becomes unsustainable in the longer run. So I would love to learn if you have some contextual answers around how you guys are solving these kinds of problems.

[17:53] Himanshu: Please kick it off, Stone; I'm also very curious to know what notion of verifiability MIRA is focusing on in the context of this question…

[18:24] Stone: That will help me complement it. No, I feel bad; I wish we had Sid on so that they could give you guys a more articulate answer to this one, since they've got a bit more of the technical knowledge. But when it comes to the actual cost, we've seen less than a 2x cost increase for running our ensemble of models. Essentially, what MIRA does to verify accuracy is leverage this ensemble evaluation, which…

long story short, is leveraging three different models on the back end to verify the outputs that are then given to the end user. They verify before the output is delivered, because as the developer leveraging this and using

our ensemble evaluation, you're not necessarily seeing the outputs from each of the models within the ensemble, unless you use a feature within our Mira console that lets you actually see those outputs.

Still, for the most part, you're just seeing the one output from the model you're working with, and you only see it after our ensemble has reached consensus. So it's not that you're visibly validating and running four different inference requests, one with the model you're using and then three for our ensemble, if that makes sense. So the costs aren't necessarily…

okay, I'm using one extra model to verify, so now my costs are going to be 2x, three models, 3x, and so on. At least that's what we've seen so far. Himanshu, do you want to add anything here?
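
To make the idea concrete, here is a minimal Python sketch of ensemble-based verification in the spirit of what Stone describes: one primary model generates an answer, and a small ensemble of verifier models must reach consensus before that answer is released. It is not Mira's actual implementation or API; the query_model helper, the model names, and the strict-majority consensus rule are illustrative assumptions.

```python
# Minimal sketch of ensemble-based answer verification (illustrative only).
# query_model is a canned stand-in for a real inference call; the model names
# and the majority-vote consensus rule are assumptions, not Mira's API.

def query_model(model_name: str, prompt: str) -> str:
    """Hypothetical stand-in for an inference call; replace with a real provider."""
    # Canned replies so the sketch runs end-to-end without any external service.
    return "VALID" if "Reply with exactly" in prompt else "Paris"

def verify_with_ensemble(question: str, candidate: str, verifiers: list[str]) -> bool:
    """Ask each verifier model to judge the candidate answer; accept on majority vote."""
    votes = 0
    for model in verifiers:
        judgement = query_model(
            model,
            f"Question: {question}\nAnswer: {candidate}\n"
            "Reply with exactly VALID or INVALID.",
        )
        votes += judgement.strip().upper().startswith("VALID")
    return votes > len(verifiers) / 2  # assumption: strict majority is required

def answer_with_verification(question: str) -> str:
    candidate = query_model("primary-model", question)           # one generation call
    verified = verify_with_ensemble(                             # three short judgement calls
        question, candidate, ["verifier-a", "verifier-b", "verifier-c"]
    )
    return candidate if verified else "Could not verify this answer."

if __name__ == "__main__":
    print(answer_with_verification("What is the capital of France?"))  # -> Paris
```

One plausible reason the overhead can land below a naive "one extra model means 2x" multiplier, consistent with the sub-2x figure Stone mentions, is that the verifier calls are short judgment prompts rather than full generations; whether that holds in practice depends on the models and prompts used.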

[20:23] Himanshu: Right, I think the fact that verifiability will have extra cost is a no-brainer, and this is a tradeoff. Why are we on the EVM, anyway? Why are we doing things on the EVM? Why don't you just write that code in Python, give it to me, I run the same code, and then I should be able to check it? The point is, code doesn't work that way; it won't give you the same output, because there are so many dependencies in hardware, my machine versus your machine. So we set out to limit our capability and came up with the EVM, and now it enables all of us to run the same code and verify the same thing. Not only have we multiplied the compute, we have also limited our capability just to verify, so the fact that verifiability has a cost is a given. Now, how low can that cost become? This is where all the ZK magic has been heading: in a world where proofs can be generated for free or at negligible cost, verifiability will become very cheap for all compute, and I won't worry about AI verifiability and all that.

[21:00] Himanshu: Now, it turns out that world is not there yet. Maybe I haven't kept up, but it's not there yet. And even there, I think no one asks, and this is something I find very intriguing, who is actually checking that the ZK is doing the right thing? There are maybe five auditors in the world who can audit and check the security of this code, so basically five experts in the world are verifying everything for you. You are not, because you don't know what's in there, and you can't know that about the ZK proof; it's very complex. We pay a lot of cost for verifiability, we limit our capability, and we settle for slower, more limited systems. So, what…

[21:26] Himanshu: Prashant, what you are asking, dude, what you are saying is exactly what motivates one of the Sentient threads. Early on, and this is the pitch, this was one early realization, which to my understanding Pramod was the first to have: we don't need this hard verifiability. If you look at the paper we wrote in '23, it was called Sakshi; it was talking about hard verifiability. Not really a full paper, just a quick concept about what we could do there, because at that point we were quite deep into optimistic compute. Optimistic compute is fast; what's the tradeoff? It's fast, but you have to repeat the whole calculation, which happens later and with delays, but as time passes, it is okay.

[22:31] Himanshu: So one thing on AI is that I don't want all of it, so what are the possibilities? Just to Prakarsh's point (ah man, hard spelling), it's not just two different models; you can take the same model and run it with two different CUDA kernels, because there are a lot of approximations and round-offs that happen in a GPU operation, so how you round off also changes your output. And these outputs are not hidden outputs. I mean, no one actually verifies the output for specific queries; the output may look different, but as long as the evals are roughly similar, you think the model has performed similarly. It's very hard to recreate model performance; you just remember those numbers, and even recreating that eval is hard, so no one is ever really asking for that kind of verifiability on AI. When you say Llama 3.1 is great, I never say great on Prashant's GPU, my GPU, or someone else's GPU; no one is saying that. It's fine, it's good, it has some benchmarks, I'm okay with it, and if it's off by a few points here and there, I'm okay with it. To extend it and ask what verifiability means in different cases: what Prashant is pointing out is something we find very exciting. Can I just check whether it's my model or not, periodically, in the middle? That's one kind of model verifiability that we are after, and

this is not cryptographically airtight, because, as we know, models hallucinate, and if you fine-tune them, models have this problem of catastrophic forgetting; in the end, you touch all the weights of the model. So how do you still embed something in the model, the kind of thing Prashant is describing, which allows you to check that it's that model? The secret phrase that only you know. This story I learned from someone else, but now I say it's my story that I told my mother, even though it's someone else's, okay, but it's a great…

[23:57] Himanshu: Story; anybody can have it. See, this guy was telling me he told his mom: if someone calls you, asks you for money, and it sounds like my voice, you have to say this phrase, and only if I respond with the right answer should you continue. This is what we want from our models, right? The same fingerprint is what I want from a model: we should know it's that model, and then you are okay for some time. Now, there are a lot of attack vectors even in this kind of verifiability.

What if I detect that your query is the verification query and route it to the right model, while other queries are routed elsewhere? With this kind of thing, you think of attack vectors, and then you come up with a list of security requirements for this kind of verifiability, and that's what fingerprinting is about, at least one application of it; that's what we are doing. But I completely agree with the spirit of this; there's not one kind of verifiability for AI that will suffice, and what I think, Stone…

[24:53] Himanshu: You are referring to is also very interesting: the factuality check. Many of the agentic architectures that are built have this two-model debate before giving an answer; it has been part of the agent approach from early on. Okay, Prashant, you are right, compute is expensive, but let me break your heart, man: no one cares about it right now. They'll probably care about it when bargaining with you once they are about to go bankrupt. All architectures are damn compute-hungry. The whole thesis of current AI design, and I can bring some context to this, is: forget about compute, imagine compute were infinite, what would you do? That's why you generate 20,000 tokens to count the number of Rs in "strawberry"; it's inner reasoning. We get so excited about reasoning, reasoning, but the question is very dumb, and it's generating all these tokens, and each inner thought is costing you money, energy, the ultimate resource, but we don't care right now. There can be a world where intelligence is energy-aware, aware of its bills, setting up on Spheron for…

[26:09] Himanshu: everything. Essentially, my take is that that's a very different world; that's basically how humans operate. Then you need to decide when you bring your best out and when you don't, and that's what humans are, that's what hormones do, that's what we do. But current AI is not designed that way; current AI always has to put its best foot forward, its strongest, no matter what dumb question you ask. You ask it one plus one and it will reason through all of it. Yeah, we love that, and the machines are such that we want everyone to

spend more, so don't worry about it for now. However, there are some protests. Yann LeCun famously says things I can't follow, but this part I can follow: he's saying energy-fair AI, okay, it will be limited, but it will be energy-fair. Now, if that's the case, it is fair to spend compute on verifiability too, and on having a better answer. Factuality and facts: this whole notion of facts, in the world of social media itself and in the world of AI,

[27:13] Himanshu: even more so, is open for debate. There's nothing called a fact; a fact is a consensus mechanism. So why don't we have limited verifiability through debates, or through multiple agents, multiple AIs? That's what MIRA is doing, and maybe one of them is a specialist and you always consult with it. So verifiability is expensive, you are right, but I don't think for the next year people will take that seriously. Eventually, though, you are right, and I also feel very strongly that we should be energy-careful in designing AI, mainly because it's possible that we don't become this civilization which has infinite energy; right now we are actually betting on that, that we will harness an infinite amount of energy.
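
To ground the "secret phrase" fingerprint check Himanshu describes, here is a minimal Python sketch: the owner periodically sends a trigger prompt whose expected reply only the genuine fine-tuned model should produce. This is an illustration of the general idea, not Sentient's actual fingerprinting scheme; the trigger/response pairs and helper functions are invented for the example, and, as noted above, a real scheme must also defend against an operator detecting verification queries and routing them to a different model.

```python
# Minimal sketch of a fingerprint spot check (illustrative only, not Sentient's scheme).
import random

# Hypothetical fingerprint pairs assumed to have been embedded during fine-tuning.
FINGERPRINTS = [
    ("key-phrase-17", "the owl flies at midnight"),
    ("key-phrase-42", "seven green lanterns"),
]

# Canned replies so the sketch runs as-is; a real check would hit a hosted endpoint.
_CANNED_REPLIES = dict(FINGERPRINTS)

def query_model(endpoint: str, prompt: str) -> str:
    """Hypothetical stand-in for querying a remotely hosted model."""
    return _CANNED_REPLIES.get(prompt, "ordinary answer")

def spot_check_model(endpoint: str) -> bool:
    """Send one randomly chosen fingerprint query and check the reply."""
    trigger, expected = random.choice(FINGERPRINTS)
    reply = query_model(endpoint, trigger)
    return reply.strip().lower() == expected.lower()

if __name__ == "__main__":
    # Interleaving spot checks with normal traffic makes them harder to detect
    # and route around (the attack vector discussed above).
    status = "model verified" if spot_check_model("https://host.example/infer") else "model mismatch"
    print(status)
```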

[27:57] Prashant: I do agree with all the points you have just made; that was the intent behind the question, to dig into both the philosophical and the technical angle of it. But just to pitch Stone a little right here, because I was…

very happy when I saw MIRA and what they have been doing. I think the question I asked was kind of meant for you to pitch MIRA very aggressively, but let me pitch it a little more myself, just to add one more factual thing about MIRA. What you guys have been designing is something I personally loved. I don't know how your team looks at it, but the way I look at MIRA is this: say today I ask a very simple question, who is the President of India or who is the President of the USA. If a model is trained on data up to 2023 or even before that, most likely that answer will be very different. But if I go to MIRA and ask the same question, even though the model is giving me the wrong answer, I will always get the right answer, because at run time I'm getting fed data that has been checked against the social graph. When I say social graph, I mean we are crawling the web, crawling the current government site to…

understand where exactly the data is and what the true data is, then comparing the output and telling you: no, this is wrong, this is right, using the same output but basically merging both outputs together. And honestly speaking, what you guys are doing will, in the longer run, reduce the cost rather than increase it, because you are not fine-tuning the same models again and again on new datasets. Instead, you…

are utilizing the same one, but using crawlers to verify and validate the data, and that's how I think the entire MIRA framework basically works. I dug a little deeper into it, and I love the concept overall, because this is very important. I don't know how much Himanshu and the tech folks will agree with this, but I come from an infra background, and for me, honestly speaking, modeling, training, and fine-tuning are all fine once you're doing them, but imagine having to run it as a company: you'll be screwed, your team will always be chasing the data. "Okay, this is not coming out fine, something we missed while setting the data parameters, something has happened." It will be a nightmare, and a lot of teams will struggle to keep up with this space at speed. And that's where I think,

if you combine the fingerprinting and the verifiability, all of these things Sentient is building and what you guys are building at the MIRA protocol, if you combine them properly, it will bring real value; that's how I basically look at it. It's also beneficial for us, because then we can also host your platforms on Spheron and make them more cost-efficient and effective. But I'll pause here. Thank you so much, guys, for being open and responsive on those things; I'm quite interested to hear more of the questions we have written down for you. So Prakarsh, back to you.

Role of TEE Environments

[31:47] Prakarsh: Yeah, I think this question is for you, actually. There is this wild-card question: where does TEE come in? What role does TEE play, eventually, when we are speaking of verifiability at a very large scale? As a compute provider, we see that everybody is so interested in TEEs, but we know TEEs were around before the whole narrative took off. Where do you feel the entire TEE segment fits in?

[31:58] Prashant: A lot of TEE folks are going to ban me after this answer, but the thing is, it depends. What Stone was just saying about verifiability, and what Himanshu just told us: does it require a TEE? The answer is no. There is zero requirement for TEEs here. I'm a design guy, I look at things more from a design perspective, and design is about fundamentals; you talk fundamentals, you

don't talk about something that is walled off, okay. From the design perspective, what you realize is that the TEE you are creating to protect yourself is not the whole answer; you cannot say it's an entirely black box running there. That's not true. As long as there is HTTPS, as long as all these communication layers that exist today are there, there is going to be a way to intercept

these requests, and security vulnerabilities will always be there. The only thing we are avoiding with a TEE is that devs cannot see my private keys. I don't think that's the use case we are discussing here, but if we go in that direction, then that's where TEE plays a very vital role, and even then in combination with MPC, multi-party computation. Alone, TEEs will not be enough. The reason is that if you bring just the TEE alone, and correct me, guys, you can also correct me on this, I don't believe in a setup where one key is under one individual entity's control, and in the TEE segment that entity is Intel. Intel has the power to screw you over in multiple ways, and they do claim it's not like that, but somehow I have a feeling, because in a design world there is no way you can hide your private

keys until there are self-executed environments created by the agents or systems themselves. As long as you have to inject private keys, or anything else that has to come from outside at the hardware level, TEE is the one part really required for AI agents to hold keys. But now there are other systems being built; for example, in our case, we have built Skynet, which doesn't require a TEE as such, and…

it can rely on this collective intelligence system to avoid the TEE exposure. So a lot of these design-based systems can avoid TEE usage. But yeah, if you ask me directly, and I think Himanshu and Stone can also answer this, I don't think these guys are using TEEs, honestly speaking; if they are using them, I don't know why they'd be using them for LLMs. But I'll pause here.

[35:10] Stone: I don't believe we are, at this point; I can circle back, but no, I think when it comes to the verification we're focused on, it's mostly not about verifying that a transaction is happening on-chain, but more about the accuracy and reliability of the outputs, as I mentioned earlier.

[36:44] Himanshu: Okay, so on your TEE answer, I don't know, Stone, did you add something after that, or can I just take it from there? There were a few points in what you said, but definitely, right now, in our use case, we are pretty interested in TEEs; we put a lot of energy into it, and it's somewhat complementary to LLM verifiability for us right now. It sits outside, at the agent level. We are in this notion of loyal AI, right? The model has to be faithful to the community, and the agent has to be loyal to the model. So the first and foremost use case of TEE for me, which is what the EVM can also do in some sense, but agents are more open, is what Andrew Miller recently tweeted: he said dev-proof. What's in that code this agent is running, which is holding whatever, $10 million, $100 million, maybe tomorrow $50 million, who knows? What is in that code, and can the dev not mess around with it? This is the promise of TEE. Now, there's a key point.

There are two key points in the sense of key management. If you look at these agents holding wallets, firstly, are those keys inside those TEE wallets? That's not a great solution, because there are a lot of issues with it, and if the TEE restarts, what happens? So a complete TEE solution will have a separate key management solution.

So key management sits outside that. The other part Prashant raised is whether we trust Intel and AWS; we developed a lot on AWS Nitro for it. Absolutely, you are trusting those two things, and absolutely there is a risk in that, in the simple sense that a lot of the AI play in the future, because of the way AI is going and because it has become sort of a war-grade technology, will have to be, at least for crypto, regulator-free. Having my complete app controlled by a company governed by a single geography's laws is tricky, so that is a real risk, and that's where it has to be complemented with some kind of decentralization as well. Still, as a technology today, maybe I'm wrong about this, but in my limited experience, the flexibility one gets with TEE, if it delivers its promise, which it doesn't right now…

No one can write reproducible code for you; even basic things are so hard, it takes such hard effort to reproduce things. But if it can deliver its promise, the flexibility is what is attractive: it covers a large span of applications, and yet it's secured, you can verify the code. That's what attracts.
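
To make the "can the dev not mess around with that code" promise concrete, here is a generic Python sketch of the remote attestation idea: the enclave reports a measurement (hash) of the code it is running, and a client compares it against the hash of a build it trusts. This is a conceptual illustration only, not tied to Intel SGX, AWS Nitro, or any real attestation SDK; the fetch_attestation_report helper and the report format are invented, and a real verifier must also check the hardware vendor's signature over the report and its freshness, which is exactly where the reproducible-build difficulty Himanshu mentions bites.

```python
# Generic sketch of verifying an enclave's code measurement (illustrative only).
import hashlib

def expected_measurement(trusted_code: bytes) -> str:
    """Hash of the agent code you expect the enclave to run (assumes a reproducible build)."""
    return hashlib.sha256(trusted_code).hexdigest()

def fetch_attestation_report(enclave_url: str) -> dict:
    """Hypothetical stand-in for requesting a signed attestation report from the enclave."""
    return {
        "measurement": hashlib.sha256(b"agent-v1.0 source").hexdigest(),
        "signature": "...",  # in reality, signed by the hardware vendor's attestation key
    }

def verify_enclave(enclave_url: str, trusted_code: bytes) -> bool:
    """Accept the enclave only if its reported code measurement matches the trusted build."""
    report = fetch_attestation_report(enclave_url)
    # A real verifier would also validate the vendor signature and report freshness here.
    return report["measurement"] == expected_measurement(trusted_code)

if __name__ == "__main__":
    print(verify_enclave("https://host.example/enclave", b"agent-v1.0 source"))  # True in this canned demo
```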

[39:49] Prashant: Yeah, but one more thing we must consider here, and this is where I go a little further: again, it's a design problem. TEE, by design, is complex, right…

Because there is a bootstrapping issue for the system: the moment the system bootstraps, you have the backup issues, then the storage design issues. What happens if you are plugging in file storage, or plugging in outside network storage? Are you leasing out the entire VM, or are you leasing out a container system? If it's a container system, then there is some leakage there as well, in multiple ways, if it hasn't been properly designed. So I think it's infra chaos, honestly speaking, if you ask me from the design perspective. On the solution design side, it's better to start with a very naive, common-sense problem statement: do we really need it? You can also design the same thing via KMS systems, which have been well known in the space for a very long time; people have been using Key Management Services and similar systems to protect their keys, which can also be achieved in multiple ways.

I think TEEs will play a vital role, because as we move towards more decentralization, we need more robust systems. However, those robust systems are not yet ready. We did ask TEE experts about this: what happens when it happens, right? But I am very bullish on some of the teams building TEEs; they have already run into all of these problem statements. Still, I have asked them one question: how are you going to control the cost? Because the cost will skyrocket, and the moment that happens, it becomes unsustainable in a heavy infra environment. But I'll pause here, or else I can keep going.

Mira’s $10 Million Fund

[42:54] Prakarsh: There is one thing I wanted to ask; this is more of a personal question to the different projects here. So Stone, you guys launched a $10 million fund specifically for the space. I really want to know about it: what is it, how can people be part of it, and why are you doing it?

[43:31] Stone: Yeah, we just launched our Magnum Opus program, essentially $10 million of funds for developers to build on MIRA. It's a super exciting time right now; we just launched our MIRA console, and developers are getting approved. It's still limited access, and we've got 5,000 on a waitlist, but we're slowly approving everybody for the console. More specifically, if you have any ideas for larger projects, we're looking for 10x developers, as they say, people trying to tackle some of the largest problems in the space, leveraging our verification to help you differentiate and provide better results for anybody in the community. Whether you're trying to build different AI agents or different things on the consumer or gaming side, definitely reach out and get in touch with me if you're interested.

[43:15] Prashant: I applied for this; let's see if my application gets approved. But yeah, consider the stream a public appeal to MIRA. No, I think it's just, take a look into it.

[44:50] Stone: It's been a really exciting past couple of months for us. We've essentially grown from zero last October to now doing over 200,000 inference queries daily, with Spheron helping us out with a lot of those, and we have over 400,000 monthly active users. We launched our node delegator program, which IAN was a Genesis partner of, and it went really well; we'll have one more drop, and hopefully, if the market stays relatively okay and we get all of our ducks in a row, TGE later on.

What Is Sentient Doing?

[44:02] Prakarsh: Amazing, Stone, let's go. And Himanshu, the next question is for you: what exactly is Sentient's core value add, and how does Sentient stand out from other players? There are many, eventually, but how does Sentient stand out through that value addition?

[46:27] Himanshu: So we are singularly focused on model creation. We are a model company, and the model creation aspect of utmost interest to us right now is that we want to build these loyal AI models. What that means is that today every company has a team of ten

people working for them to decide what their AI will look like.

But there are just four companies in the world who are leading, maybe five, maybe a few more, and with China, a lot more. Still, they're all aligned either to the regulator, for safety and censorship measures, or to their product managers, for the outcome, the way search algorithms and recommendation engines have been. We feel that AI is an opportunity to redesign that whole alignment system, because otherwise there will be very few applications, and their alignment teams,

their recommendation and design teams, and their preference design teams will dictate everything: what's trending, what to show you, search results, everything. I think our goal now can be to give the world a programmable layer for making AI loyal to them in the different ways they want. What that means is, number one, you should be able to set the alignment of those models to the form you want. Interestingly, Anthropic did some experiments on this; they call it constitutional AI, and they did it about…

one and a half years back, with 1,000 people recruited, sort of lip service to it, but the framework is there. In fact, we are essentially doing what Anthropic would have done if they were not a regulated company, in the sense of wanting to do something complementary to OpenAI. When Anthropic talks about alignment, it's safety and harmlessness; of course, safety and harmlessness are the most important thing, but the hidden part of alignment that no one talks about is actually biases and preferences, which determine everything, because that's what determines who that reasoning works for. The powerful reasoning that the model has: who is it working for? Is it working for your business case or not, even at the business level? Or at a community level, is it working for your community? Let's say a model's knowledge tells it that Solana has been much stronger than Ethereum over the last five years; it has seen some trends and drawn that conclusion. Now, no matter what agent you build on it…

whatever prompt you give it, its inherent reasoning is: if you ask it where to invest, it says Solana. Okay, that's its inherent bias. Why would an Ethereum project support this model or build on it? It's stupid, right? So every community wants a different programmed alignment. The same argument applies at the country level: those who follow narratives in India, for example, will know there's a hard push right now on building an India-aligned AI, and that's for the same reason. So at the country level, same argument; that's another community. So the first part is aligned, community-aligned. Now, how do you know it's your model, and why should I participate as a community in it? That's the community-owned part. With our first model, Dobby, we saw how excited people can be about such things. Anthropic had 1,000 people governing their model, and even that was not really governing, they did a survey; we have 650k people governing our first model, and this is the scale at which model…

governance can happen. Direct democracy of models can happen, and we are quite excited about it. Of course, you can't expect people to take calls on every aspect of a model's alignment, so we are thinking of a mechanism, sort of "the proposer is the builder, and governance is left to the community." Proposals will come based on how the model is being used and where you want to take it, and then the community has a say. Okay, that's how we're thinking: they own it, they govern it. And the last part is fascinating; it's what we call control, which is being able to add things in the model so that specific queries determine its behavior. For specific queries, the model's behavior can be very different: secret phrases which allow some secret accesses. These attack vectors, called backdoors, essentially…

One thing we are doing is converting them into an asset. For example, and this is an example that can apply in many places, I'll give you one. Vyas, who was earlier at EigenLayer and is now doing something else, put out an example where he was locked out of his door. He had an app, but the app, obviously, would not let him in; the app is for introducing yourself to the person inside, and it won't grant you access to the door. Now imagine if that app had a backdoor where he could start with a secret phrase and say "open the door," and because of that phrase the door opens, because only he knows it. Can the model have that kind of control? That is what we call control; one example of control is controlling certain queries.

A simple example of control is something where you don't want hallucination; take multiplication, you don't want any hallucination there, and that training has happened, and the model is well controlled on that. There are many examples of this. So: alignment, ownership, and control, for all the available models, that's what Sentient is doing. I feel we are the only players doing this in crypto, right…

[55:50] Prashant: Are you looking for more, [Laughter] players? Just a joke.

[50:10] Himanshu: This is a good question, actually. The answer Peter Thiel has given is: no, you are not looking for more players; competition is for losers.

What Is Spheron Doing?

[50:25] Prakarsh: I think my last question is for Prashant. In terms of compute, we had so much discussion around it: what exactly is Spheron doing towards bringing programmability to compute? Why do you feel this is the need of the hour, why is there a big market gap that requires programmable compute, and how can it be done easily, along with the composability of it?

[50:57] Prashant: I think, to achieve any of the things Himanshu and Stone have just spoken about, we require compute, and we require compute at scale. The reason is very simple: if you need a model to be trained, fine-tuned, or even just to run inference on, you need compute. How do you get that compute? There are multiple routes and opportunities: you can go to a centralized player and access compute there, but when you build something that is more community-driven and community-oriented, do you really want your funds going to someone who might not give that funding back? Because that's how an ecosystem works: an ecosystem only thrives when the same funding comes back into the ecosystem, ensuring that the people aligned to it benefit, versus people who just come to take value out of it. That's where Spheron plays a very vital role, ensuring that value creation remains inside Web3 rather than leaving it. That's one part. Now, coming to the programmability part and how it is very different:

I always take this example. We see very few agents running today, using Sentient's or somebody else's model, Ollama, or whatever. People will be using a bunch of these models around the globe. What will they be doing with them? Are they just going to be chatting? Mostly not. We are going to see a world where agents manage our lifestyle: what I should eat today, what I should drink today, alarms, email triage, whether there is a critical email and whether a reply should be written on my behalf. All of these things will slowly be handed over to agents. Now imagine who will run all of this behind the scenes; just give it a thought for once. Are we going to see 10,000 or 100,000 different companies performing these tasks in different places? The answer could be yes or no, but if it is true, then we are going to be…

seeing a massive amount of compute required just to make it work. Then there is another question: who is going to manage this compute out of the box? Is it going to be a human managing those 10,000 agents? Mostly

No, these agents should be managing it themselves, and that’s where autonomy comes into the picture. To bring autonomy, what do you need? You need programmable compute; if you don’t have programmable compute, you can never achieve autonomy. You can disregard this statement if you want, but the only way to gain autonomy is to bring programmability into the compute, and that scale of compute is only available on retail devices and in data centers out there, not on centralized servers, because centralized servers also have certain restrictions and you can only go so far. I don’t know how many Web3 companies have worked with a centralized provider as a gold-tier partner or at any of these partnership levels, but I can tell you, on the infra side, if you try to go and spin up 20,000 instances on AWS today, I bet you they will block you, they will not allow you to do that. You then have to go into their different partnership tiers to enable that to happen, and then a single API endpoint failure can create a massive disaster for those 10,000 agents that were deployed. So there is a lot that we are going to see get hampered at…

the foundational level if there is no programmable compute out there. And on the question of how to aggregate it: we were doing a lot of research around compute, and what we found is that if we aggregate even 1% of the compute supply around the world that sits in our homes, it will surpass whatever compute anyone in the centralized system owns. So essentially, if the community comes together and we give them enough of a platform, enough of a place for their computing power to be sold on the open market, then we can essentially build the biggest data center people have ever seen, or combine the entire world’s computing power in one place. And the beauty of this computing power is that it is not barred or restricted from people using it. For example, there was a very big debate after DeepSeek, in the US as well, about how it happened, how these guys got the GPU supply. The question is fair, but that’s not the question we should be asking.

I think we should be giving our supply to as many people as possible so that we can see more and more of these innovations coming out of the box, right? So that’s what we need around compute, and that is where Spheron plays a very vital role. But yeah, I’d love to wrap it up here.

[62:12] Prakarsh: That brings us to the end. Everybody has made their point. Thank you so much for joining us, thank you for being here and for being part of the stream, and have a good one.



Source link

7 Technologies Powering the Metaverse World – Metaverseplanet.net

0
7 Technologies Powering the Metaverse World – Metaverseplanet.net


The concept of the metaverse was first introduced in 1992 when Neal Stephenson mentioned it in his science fiction novel ‘Snow Crash’. In his vision, the metaverse was an online universe resembling reality, where users could interact through avatars and escape real-world challenges.

Decades later, after billions of dollars in investment, big tech companies have successfully built an immersive digital environment that they now call the metaverse. But what technologies are turning this once-fictional dream into reality?

The rapid rise of the metaverse has been driven by cutting-edge advancements in various fields. In this article, we will explore the key technologies that have contributed to its growth and examine the factors fueling its popularity.

Stay tuned as we dive into the foundations of the metaverse and uncover the innovations shaping its future.

1. Why Has the Importance of the Metaverse Increased in Recent Years?

The metaverse is commonly defined as a virtual world where users can interact, socialize, play games, and engage in business activities. Recently, there has been a surge in popularity, with more users becoming involved in this digital ecosystem. As interest grows, tech companies have begun investing heavily to establish an early presence in this emerging market.

The integration of cryptocurrencies, NFTs, and play-to-earn games has transformed the metaverse into a hub for next-generation users. Its significance is primarily due to its potential to become the future of social media. With its immersive 3D environments, social connectivity, and entertainment features, the metaverse is poised to become a global phenomenon.

However, the realization of the metaverse has only been possible through key technological advancements. In this article, we will explore the 7 core technologies that drive this virtual universe.

2. Blockchain: The Backbone of the Metaverse

Blockchain technology serves as the foundation of the metaverse, as most applications rely on decentralized networks. It ensures transparency, security, and interoperability, making it an essential component of this digital space.

Key Advantages of Blockchain in the Metaverse

Decentralization & Security: Blockchain operates as a virtual ledger, recording transactions securely and transparently.

Data Integrity: Information is stored in a decentralized database, reducing the risk of data leaks and unauthorized modifications.

Immutable Data Records: Blockchain chronologically arranges data in groups known as blocks, ensuring that once a block is sealed, it remains unchangeable. This prevents manipulation and enhances trust in the metaverse.

Each block of data is linked to the previous one, forming a chain—hence the name blockchain. This structure ensures a tamper-proof system, which is crucial for the metaverse’s transparency and reliability.
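To make this hash-linking idea concrete, here is a minimal Python sketch, a teaching toy under simplified assumptions rather than the design of any production blockchain or metaverse platform; the event names and fields are made up for illustration. It shows how each block commits to the previous block’s hash, so that tampering with history becomes detectable:

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block whose hash commits to its own data and to the previous block."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def recompute_hash(block):
    """Recompute a block's hash from everything except the stored hash itself."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

# Build a tiny chain: every block stores the hash of the block before it.
genesis = make_block({"event": "genesis"}, previous_hash="0" * 64)
block_1 = make_block({"event": "land_sale", "parcel": "A-17"}, genesis["hash"])
block_2 = make_block({"event": "nft_transfer", "token_id": 42}, block_1["hash"])

# Tampering with an earlier block breaks the chain: the hash stored in the
# next block no longer matches the recomputed hash of the altered block.
block_1["data"]["parcel"] = "B-99"
print(block_2["previous_hash"] == recompute_hash(block_1))  # False -> tampering detected
```

In a real network it is the consensus rules and the many distributed copies of the ledger, not a single script, that make such tampering practically infeasible.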

3. Crypto Assets

Cryptocurrency technology is one of the most essential components of the metaverse. Since most metaverse platforms transact primarily in cryptocurrencies, users typically need to exchange real-world currencies for digital assets before engaging in transactions. Additionally, non-fungible tokens (NFTs) play a crucial role in digital ownership, allowing users to purchase virtual real estate and make in-game transactions.

The Growing Value of Cryptocurrencies

With a surge in cryptocurrency investments, digital assets have become highly valuable. As more platforms accept different cryptocurrencies, their usability and accessibility continue to expand. This growing adoption rate has also influenced large investors, shifting their perspective on cryptocurrencies as a long-term financial asset.

How Cryptocurrencies Work in the Metaverse

In the metaverse, cryptocurrencies hold significant value. For instance, if you want to purchase virtual land in Decentraland, you must convert real-world currency into MANA, the platform’s native token. A similar model applies to various metaverse platforms and games, reinforcing the importance of digital currencies in this evolving virtual economy.

As the metaverse expands, cryptocurrencies will remain at the heart of digital transactions, shaping the future of decentralized finance (DeFi) and virtual economies.

4. AR & VR

Augmented Reality (AR) and Virtual Reality (VR) engines play a crucial role in making the metaverse an engaging and immersive digital experience. These technologies help create three-dimensional environments, enhancing interactivity and realism within the virtual world.

While VR and the metaverse may seem identical, they are fundamentally different. Here are some key distinctions:

Virtual Reality (VR) is just one component of the broader metaverse, which integrates multiple technologies beyond VR.

VR enables users to experience 3D simulations, but it lacks physical simulations, which are essential for a fully immersive experience.

Augmented Reality (AR) helps bridge this gap by overlaying virtual elements onto the physical world, allowing users to see, hear, and interact with them as if they were physically present.

The fusion of AR and VR is expected to revolutionize the metaverse, making it more realistic and attracting significant investments from global tech companies.

Artificial Intelligence: Powering the Metaverse with Smart Interactions

Beyond AR and VR, Artificial Intelligence (AI) plays a pivotal role in enhancing the functionality and efficiency of the metaverse. AI contributes to:

1. Smart Decision-Making and Data Processing

AI-driven algorithms improve business strategies, decision-making, and computing speed within the metaverse.

Machine learning techniques allow AI to analyze massive datasets and generate real-time insights.

2. AI-Powered Non-Player Characters (NPCs)

NPCs are present in almost all video games, but with AI, they become more lifelike by adapting to player actions.

AI allows NPCs to interact in multiple languages, enhancing user experience across different metaverse environments.

3. AI-Generated Metaverse Avatars

AI algorithms can analyze 2D and 3D images to create realistic metaverse avatars.

AI enhances avatars by refining facial expressions, hairstyles, outfits, and features, making virtual identities more dynamic.

The Future of AR, VR, and AI in the Metaverse

The combination of AI, AR, and VR is set to reshape the metaverse, making it smarter, more interactive, and deeply immersive. As technology advances, we can expect an enhanced user experience, attracting massive investments and accelerating the growth of the metaverse ecosystem.

5. 3D Reconstruction

Although 3D reconstruction is not a new technology, its adoption has surged in recent years—especially during the pandemic, when in-person visits to stores and properties became difficult. Many businesses turned to 3D reconstruction to create virtual showroom tours and real estate previews, enhancing customer experience in a digitally connected world.

In the metaverse, 3D reconstruction technology is essential for creating realistic environments, making the virtual world look and feel more like real life. As users demand more immersive experiences, the importance of 3D reconstruction is set to grow, transforming how we navigate and interact within the metaverse.

The Internet of Things (IoT): Connecting the Physical and Virtual Worlds

The Internet of Things (IoT), first introduced in 1999, refers to a network of connected devices that communicate via the internet. From voice-activated speakers and smart thermostats to medical devices, IoT-enabled systems can process data in real-time, adapting to user needs automatically.

In the metaverse, IoT integration offers several key advantages:

Real-World Data Integration: IoT applications can collect real-time data (e.g., weather conditions, temperature, and user interactions) to create dynamic environments within the metaverse (see the sketch after this list).

Seamless Connectivity: IoT enables a fluid connection between 3D virtual worlds and physical-world devices, allowing for real-time interactions across multiple platforms.

AI and Machine Learning Optimization: By utilizing AI and machine learning algorithms, IoT can analyze collected data, further enhancing and optimizing the metaverse experience.
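As a rough illustration of the real-world data integration point above, the following Python sketch maps a physical temperature and humidity reading onto weather parameters a virtual scene could render. The sensor names, fields, and thresholds are hypothetical and chosen only for illustration; actual platforms would rely on their own device APIs and data formats.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """A single reading from a hypothetical physical IoT sensor."""
    sensor_id: str
    temperature_c: float
    humidity_pct: float

def to_virtual_weather(reading: SensorReading) -> dict:
    """Map a physical reading onto parameters a virtual scene could render.

    The thresholds below are arbitrary and exist only for illustration.
    """
    if reading.temperature_c <= 0:
        sky = "snow"
    elif reading.humidity_pct >= 80:
        sky = "rain"
    else:
        sky = "clear"
    return {
        "source_sensor": reading.sensor_id,
        "sky": sky,
        "ambient_temperature_c": reading.temperature_c,
    }

# Feed one reading into the virtual scene's weather parameters.
reading = SensorReading(sensor_id="rooftop-01", temperature_c=21.5, humidity_pct=40.0)
print(to_virtual_weather(reading))
```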

The Future of 3D Reconstruction and IoT in the Metaverse

The integration of 3D reconstruction and IoT is set to revolutionize the metaverse, enabling a hyper-realistic, interactive, and data-driven digital world. As these technologies evolve, they will play a critical role in shaping the next generation of immersive experiences.

6. Edge Computing and 5G

Edge computing, widely used in commercial applications, plays a crucial role in enhancing data transfer speed and reducing latency. This is essential for the metaverse, where computers must handle intensive data processing to deliver a smooth, immersive experience without disruptions.

Another key enabler of the metaverse is the widespread availability of 5G networks. Previously, slow processing speeds and network lag were common issues when accessing the metaverse. However, with the expansion of 5G technology and its increasingly affordable access, users can now experience the metaverse on desktop computers, VR headsets, and mobile devices with far fewer latency-related interruptions.

Together, edge computing and 5G have revolutionized the way users engage with the metaverse, enabling seamless real-time interactions and a truly immersive digital experience.

7. Challenges Facing the Metaverse

Despite its rapid growth and rising investments, the metaverse is still in its early stages and faces several challenges that must be addressed for long-term success.

1. Security Concerns

With the rise of crypto transactions and blockchain-based assets, the metaverse has become a target for scams, hacking, and malware attacks. If the metaverse is to evolve into a permanent and serious digital ecosystem, its security infrastructure must be strengthened to ensure users’ assets and identities remain protected.

2. Privacy Risks

Privacy is another major concern, as the metaverse relies on AR & VR devices, webcams, and motion-tracking sensors—all of which can be vulnerable to cyber threats. Hackers have previously exploited such devices to spy on users, raising concerns about data privacy and user safety. If the metaverse is to gain mass adoption, it must implement robust privacy measures to prevent unauthorized access and ensure users feel secure in this virtual space.

The Future of the Metaverse: A Transformational or Forgotten Vision?

The metaverse is evolving at an unprecedented pace, but history has shown that technological innovations can fade into obscurity if they fail to adapt to user needs. For the metaverse to become the future of social media, it must address key challenges and build trust among users.

There is enormous potential for the metaverse to thrive, but the ultimate question remains:

Will the metaverse overcome these challenges and redefine the digital world, or will it become just another failed project that lost the trust of the global audience?




Source link

AI winds in Web3: AI DApps reach 2.2 million users – Metaverseplanet.net

0
AI winds in Web3: AI DApps reach 2.2 million users – Metaverseplanet.net


In January, gaming and decentralized finance (DeFi) continued to dominate the decentralized applications (DApp) market. However, AI-powered DApps also gained traction, recording 2.2 million unique active wallets, according to a DappRadar report released on February 6.

DApp Market Overview: Gaming and DeFi Maintain Dominance

According to DappRadar, the average number of unique active wallets (UAWs) per day reached 26.7 million in January. Although this figure represents a 6% decline compared to December, DeFi applications remained the leading category in the DApp industry, further expanding their market share.

DeFi DApps accounted for 28.1% of all active wallets.

Gaming DApps closely followed with 27.8%.

Non-fungible token (NFT) DApps held 16.1%.

SocialFi applications made up 6.3%.

AI-Powered DApps Surpass SocialFi with 2.2 Million Active Wallets

AI-powered DApps saw significant growth, capturing an 8.5% market share, surpassing SocialFi applications in the process. DappRadar highlighted AI as a “huge growth sector”, stating that it could trigger the next bull market in the Web3 ecosystem.

In January, the most popular AI DApp was “LOL”, which recorded an astonishing 28.6 million unique active wallets. Dmail Network followed with 4.9 million, while the virtual influencer platform MEET48 had 2.8 million active wallets.

AI and Web3: A Growing Intersection

On February 4, researchers from Switzerland-based crypto bank Sygnum Bank identified crypto AI agents as one of the emerging trends of 2025. They noted a significant rise in interest in AI-related crypto projects, but also pointed out that while these AI agents are gaining attention, they still struggle to prove their long-term value, making the space highly speculative.

Despite this, the integration of AI with Web3 continues to accelerate. On February 6, stablecoin issuer Tether announced its entry into the AI sector. Tether CEO Paolo Ardoino revealed on X (Twitter) that the company’s AI division has developed an AI translation tool, voice assistant, and Bitcoin wallet assistant.

The Future of AI in the DApp Market

While the impact of AI on the DApp ecosystem remains uncertain, DappRadar believes this category will continue expanding. With increasing Web3 and AI integration, the next wave of innovation in the crypto space could be AI-driven.

What Do You Think?

Will AI-powered DApps become a major force in the Web3 ecosystem, or is this trend overhyped? Share your thoughts in the comments!




Source link

Vitalik Buterin: Scaling Layer 1 Gas Limits By 10x Offers Significant Value

0
Vitalik Buterin: Scaling Layer 1 Gas Limits By 10x Offers Significant Value


In Brief

Vitalik Buterin argues in his new article that increasing Layer 1 gas limits can simplify and enhance the security of app development, even if most apps are hosted on Layer 2 networks.

Vitalik Buterin: Scaling Layer 1 Gas Limits By 10x Offers Significant Value

Ethereum co-founder Vitalik Buterin published an article discussing the reasons behind higher Layer 1 gas limits, even in an Ethereum ecosystem where Layer 2 solutions dominate. 

An ongoing debate within the Ethereum roadmap centers on how much to raise the Layer 1 gas limit. Recently, the gas limit was increased from 30 million to 36 million, expanding capacity by 20%, and there is support for further increases. These increases are made feasible by recent and planned technological improvements, such as better efficiency in Ethereum clients, reduced storage requirements from EIP-4444, and eventual transitions to stateless clients.

However, before proceeding with these increases, Vitalik Buterin raises an important question: in the context of Ethereum’s rollup-centric roadmap, are higher Layer 1 gas limits truly beneficial in the long term? While gas limits are relatively easy to increase, they are difficult to reverse, and lowering them could have long-lasting consequences, particularly in terms of centralization. 

He argues that increasing Layer 1 gas limits can simplify and enhance the security of application development, even if most applications are hosted on Layer 2 networks. However, Vitalik Buterin emphasizes that his goal is not to argue for or against the broader idea of hosting more applications on Layer 1, but rather to suggest that scaling Layer 1 by approximately 10x could provide long-term advantages, regardless of the outcome of that debate.

Vitalik Buterin Unveils Gas Requirements For Various Use Cases: Censorship Resistance, Asset Movement Between Layer 2 Networks, Layer 2 Mass Exits, And More

Vitalik Buterin analyzes several use cases to estimate Layer 1 gas requirements, and based on his calculations, he concludes that for censorship resistance, Layer 1 gas needs with current technology are less than 0.01x, while with more ideal technology, the requirements remain the same. To make Layer 1 gas affordable, he estimates the need to scale by approximately 4.5x. When analyzing cross-Layer 2 asset movements, Vitalik Buterin observes that gas requirements with current tech are about 278x, while ideal technology reduces it to 5.5x, and to remain affordable, the need is around 6x.

In the case of mass exits from Layer 2 networks, he suggests that with present-day technology, gas requirements could be anywhere from 3x to 117x, while with ideal technology, they range from 1x to 9x, and to keep it affordable, the needs could be between 1x and 16.8x. For ERC-20 token issuance, the gas requirement with current technology is less than 0.01x, the same as with ideal technology, but to be affordable, it could range from 1x to 18x.

Further considering keystore wallet operations and Layer 2 proof submissions, Vitalik Buterin calculates that for keystore wallets, gas requirements with current technology are about 3.3x, while with ideal tech, they reduce to 0.5x, and to remain affordable, the needs increase to approximately 1.1x. For Layer 2 network proof submissions, the figures are 4x with current technology, 0.08x with ideal technology, and around 10x to stay affordable.

Vitalik Buterin also notes that the Layer 1 gas needs with both current and ideal technologies are additive. For example, if keystore wallet operations consume half of the current gas capacity, there must be enough space left to handle a Layer 2 mass exit. Additionally, his cost-based estimates are approximate, and it is difficult to predict how gas prices will respond to changes in the gas limit, particularly in the long term. There is considerable uncertainty regarding how the fee market will evolve even under stable usage conditions.
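As a toy illustration of that additive constraint, the short Python sketch below sums several per-use-case gas demands and checks whether they fit within a gas limit scaled by a given factor. The specific shares are illustrative assumptions loosely inspired by the ranges quoted above, not figures taken from the article itself.

```python
# Toy model of additive Layer 1 gas demand versus a scaled gas limit.
# All per-use-case shares below are illustrative assumptions, not article figures.
CURRENT_GAS_LIMIT = 36_000_000  # Ethereum's gas limit after the recent increase

# Demand expressed as multiples of today's capacity that each use case would need.
demand_multiples = {
    "keystore_wallet_ops": 0.5,   # assumed: half of today's capacity
    "l2_mass_exit": 3.0,          # assumed: low end of the ranges discussed
    "cross_l2_transfers": 6.0,    # assumed: near the "affordable" estimate cited
}

def fits_within(scale_factor: float) -> bool:
    """Return True if the combined demand fits into the scaled gas limit."""
    total_demand = sum(demand_multiples.values()) * CURRENT_GAS_LIMIT
    return total_demand <= scale_factor * CURRENT_GAS_LIMIT

print(fits_within(1.0))   # False: today's limit cannot host all three at once
print(fits_within(10.0))  # True: a roughly 10x limit leaves headroom for the combined load
```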

Overall, the analysis suggests that there is significant value in scaling Layer 1 gas limits by about 10x, even in a world where Layer 2 networks dominate. This implies that short-term scaling of Layer 1 in the next 1-2 years would be beneficial, regardless of the long-term trajectory.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Alisa Davidson

Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.

More articles

Source link

Ethereum testnet goes live with Pectra upgrade as April mainnet launch looms

0
Ethereum testnet goes live with Pectra upgrade as April mainnet launch looms


Ethereum’s Pectra upgrade is already live on Ephemery, a testnet of the blockchain network, in preparation for an April mainnet launch.

On Feb. 13, Tim Beiko, Ethereum Foundation Protocol Support Lead, wrote:

“Ephemery now supports Pectra!”

This confirms a statement from Christine Kim, a Galaxy Digital researcher, who revealed on X that Ethereum client teams have focused on scheduling key testnet forks to ensure a smooth transition to Pectra.

She stated:

“All client teams said today they are on track to put out final testnet releases in the next 24 hours.”

According to her, the Holesky testnet fork is scheduled for Feb. 24, while Sepolia will follow on March 5. These test deployments serve as critical trial runs, allowing developers to identify and address issues before the mainnet activation.

Barring major setbacks, Pectra’s mainnet deployment is expected approximately 30 days after the Sepolia fork.

Kim noted that developers will finalize the mainnet upgrade date and timestamp only after Pectra is live on Sepolia. If successful, the Pectra upgrade would most likely occur in April.

Faster network upgrades

Meanwhile, Nixo Rokish from the Ethereum Foundation’s protocol support team noted that Ethereum’s core developers advocated for a more frequent upgrade cycle to improve the network’s adaptability.

Nixo said:

“Pretty strong consensus from the Pectra Retrospective post that the people want faster fork cadences… that’s going to mean less dilly-dallying about scope and more aggressively presented opinions.”

This aligns with the views of industry leaders like Paradigm, who have emphasized that Ethereum has the resources and talent needed to implement upgrades faster. The venture capital firm wrote:

“Ethereum has the resources it needs — incredible researchers and engineers eager to build the future. Empowering them with a mandate to move faster, and in parallel, will enable Ethereum to solve problems faster and avoid getting bogged down in premature debates.”

Fusaka

Considering this, it was unsurprising that the developers swiftly turned their attention to Ethereum’s next major upgrade, Fusaka.

Beiko revealed that the developers are reviewing proposed protocol changes, with several high-priority improvements under consideration. The deadline for submitting proposals is set for March 13, allowing the community to weigh in before March 27.

Meanwhile, a proposed timeline sets April 10 as the finalization date for Fusaka’s upgrade scope. By defining Fusaka’s framework early, developers aim to ensure a seamless transition and an efficient implementation process once Pectra is fully deployed.




Source link

Meta’s Metaverse Vision: A Final Decision in 2025? – Metaverseplanet.net

0
Meta’s Metaverse Vision: A Final Decision in 2025? – Metaverseplanet.net


The metaverse, which Meta has been aggressively developing under Reality Labs, has so far resulted in billions of dollars in losses rather than the revolutionary transformation the company had envisioned. With 2025 approaching, Meta faces a crucial turning point—will the metaverse become a visionary achievement or be remembered as a legendary failure?

The Countdown for Horizon Worlds

Reality Labs, Meta’s division responsible for virtual and augmented reality, has faced major financial setbacks since 2020, with cumulative losses exceeding $60 billion. According to CTO Andrew Bosworth, the fate of Horizon Worlds—Meta’s flagship metaverse platform—will be decided this year.

In an internal memo, Bosworth emphasized that the success of Horizon Worlds’ mobile version is critical for the future of the metaverse. If the platform fails to gain traction among users, it could mark the end of Meta’s large-scale metaverse ambitions.

Shifting Priorities: AI and Wearable Technology

While the metaverse struggles, Meta’s AI-powered wearable technology projects, such as Ray-Ban smart glasses, are gaining greater consumer interest. CEO Mark Zuckerberg has acknowledged that 2025 will be a decisive year, particularly for the company’s smart glasses and AI-driven innovations.

Bosworth remains optimistic about Reality Labs’ product portfolio, but he stresses that execution is now more important than new ideas. With resources tightening, Meta must perfect its existing projects instead of pursuing speculative initiatives.

A Possible Metaverse Exit?

Despite Meta’s persistence, the failure of Horizon Worlds’ mobile launch could lead the company to reconsider its metaverse investments. This potential shift in focus is underscored by Bosworth’s description of Reality Labs’ journey as an “epic adventure”—a phrase that subtly hints at internal skepticism regarding its future viability.


If the Quest series and Horizon Worlds fail to create a breakthrough moment, Meta may begin gradually reducing its metaverse-related spending. Instead, it could redirect funds into AI, wearable tech, and other emerging sectors where it sees greater immediate potential.

Conclusion: 2025 – A Make-or-Break Year

Meta’s high-stakes metaverse gamble is approaching a critical decision point. After years of financial losses and slow adoption, 2025 could determine whether the metaverse remains a core part of Meta’s future or fades into the background.

As consumer interest shifts toward AI-driven products, Meta may have no choice but to scale back its metaverse ambitions. Whether Horizon Worlds succeeds or fails, one thing is clear: the company cannot afford another year of massive losses without a tangible return on investment.

📌 Will Meta double down on its metaverse vision, or will 2025 mark the beginning of a strategic retreat? The answer lies in the success—or failure—of Horizon Worlds.




Source link

Will the Metaverse project be able to make a comeback? – Metaverseplanet.net

0
Will the Metaverse project be able to make a comeback? – Metaverseplanet.net


The Metaverse, a project that has received massive investments, has yet to achieve the desired success. Despite years of development and billions in spending, Meta’s Reality Labs has struggled to deliver the expected commercial breakthrough.

According to Andrew Bosworth, the company’s CTO, 2025 could be the decisive year for the future of the metaverse. If Horizon Worlds and the Quest series fail to make a major impact this year, Meta’s metaverse investments could go down in history as a “legendary failure.”

The Future of Reality Labs Remains Uncertain

Internal memos from Bosworth highlight that the future of Reality Labs is unclear. While Meta continues to invest heavily in virtual reality (VR) and augmented reality (AR), the success of the mobile version of Horizon Worlds is critical. To secure its market position, Meta plans to launch a series of AI-powered wearable technology products. CEO Mark Zuckerberg has also stated that 2025 will be an extremely busy year, with a strong focus on smart glasses and AI-driven innovations.

Can Meta Turn the Metaverse Around?

According to Bosworth, Reality Labs now has the best product portfolio in its history. Instead of generating new ideas, the focus is on perfecting existing projects. However, despite these efforts, Meta has struggled to attract a large user base. Since 2020, Reality Labs has accumulated over $60 billion in losses, making the success of Horizon Worlds’ mobile version a make-or-break moment.

Recently, Ray-Ban smart glasses and artificial intelligence projects have received more attention than Meta’s metaverse initiatives. If Horizon Worlds and the Quest series fail to gain traction, it is highly likely that Meta will shift its focus away from the metaverse and explore new markets. As uncertainty looms within the company, 2025 will determine whether Meta continues with the metaverse or abandons the project for different ventures.

Will Meta’s metaverse vision survive, or will it become one of the biggest tech failures? Share your thoughts in the comments!




Source link

‘Entitled’ Deal or No Deal contestant Ellie defended after being branded ‘rude’ by viewers

    0
    ‘Entitled’ Deal or No Deal contestant Ellie defended after being branded ‘rude’ by viewers


    Deal Or No Deal contestant Ellie was defended on social media after being slammed by some viewers yesterday (Thursday, February 13).

    Yesterday’s episode saw Ellie take to the hot seat in the hope of winning a huge cash prize.

    Ellie was on the show yesterday (Credit: ITV)

    What happened on Deal Or No Deal yesterday?

    In yesterday’s edition of Deal Or No Deal, it was Ellie’s turn to try and win big.

    Taking her seat, Ellie told Stephen Mulhern that she was a firm believer that “anything is possible” and that she could win a huge cash prize.

    “I’m just here to beat the banker,” she said.

    Towards the end of the show, Ellie only had the chance of winning £10,000, £2,000, or 50p.

    She was offered a deal of just over £2,400 by The Banker, but she opted to reject it.

    She then managed to get rid of the 50p box, and The Banker then offered Ellie £4,825. However, she rejected that offer too.

    She then opened her box to find that she’d managed to win £10,000!

    Ellie was criticised (Credit: ITV)

    Viewers slam Ellie

    However, some cruel trolls were watching the show yesterday, and they were keen to slam “rude” and “entitled” Ellie. Many took umbrage with the fact that she didn’t say “please” to The Banker.

    Others had an issue when the player told the Banker to make her offer a “decent one”.

    Even The Banker thought she was being cheeky, telling her that people “commonly say please” before asking for a good offer!

    Fans weren’t impressed with her.

    “That Ellie is one rude cow. I can’t be the only one who thinks this,” one viewer fumed.


    “She comes off nasty and a mean girl,” another tweeted. “What a horrible obnoxious entitled brat she was. Generally, since the show restarted the people have been great. She is the exception,” a third wrote.

    “Mean girl energy,” another said.

    Ellie was defended by fans (Credit: ITV)

    Deal Or No Deal fans defend Ellie

    However, Ellie had plenty of viewers defending her on social media yesterday.

    “A confident and pretty female contestant called Ellie plays #DealOrNoDeal and the amount of middle-aged guys giving her stick is very telling & pretty vile,” one fan tweeted.

    “Yes, the classic tale—put a confident, attractive woman in the spotlight, & suddenly fragile egos start cracking like cheap glass. Maybe if these blokes spent less time whining & more time leveling up, they wouldn’t feel so threatened by a woman who knows her worth,” another said.

    “Goodness me aren’t folk nasty about somebody who they don’t even know! Well done Ellie,” a third wrote.

    “GWARN ELLIE, Awww I’m so happy for her man,” another said.

    Read more: Deal or No Deal fans fume over contestant’s ‘attention-seeking’ mum as daughter wins £50,000

    Deal Or No Deal airs on weekdays from 4pm on ITV1 and ITVX. 


    What do you think of fans trolling Ellie? Leave us a comment on our Facebook page @EntertainmentDailyFix and let us know.



    Source link
