Metaverse


CoinACE Opens Early-Access Pre-Registration: Practice Crypto Trading, Earn Real ACE Tokens | NFT News Today



CoinACE, the next-generation cryptocurrency trading simulator, today announced that pre-registration for its early-access program is officially live ahead of the platform’s public launch on June 1, 2025. Tailored for both novice and veteran traders, CoinACE offers a zero-risk environment where simulated gains can be converted into real ACE tokens through its pioneering “Profit-to-Mine” mechanic.

Platform Highlights

Trade BTC, ETH, and a wide range of altcoin pairs with authentic order-book depth, funding rates, and liquidation logic—without risking actual capital.

Every time a simulated position closes at a profit, the platform automatically converts your virtual PNL into ACE-token mining power. Early participants benefit from lower mining difficulty, maximizing their initial token yields.

Challenge friends or random opponents to timed trading duels, climb the leaderboard, and claim exclusive in-game and token rewards.

Share your performance on dynamic trader profiles, follow top performers, and participate in weekly challenges to showcase your skills.
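The "Profit-to-Mine" mechanic described above could be sketched roughly as follows. This is a minimal illustration only: the function name, formula, and difficulty parameter are assumptions for the sake of the example, not CoinACE's published mechanics.

```python
# Hypothetical sketch of a "Profit-to-Mine" conversion. The formula and
# parameters are illustrative assumptions, not CoinACE's actual design.

def pnl_to_mining_power(realized_pnl: float, difficulty: float) -> float:
    """Convert a positive simulated PNL into ACE mining power.

    Losing trades earn nothing; profitable trades earn power that shrinks
    as mining difficulty rises (early users face lower difficulty).
    """
    if realized_pnl <= 0:
        return 0.0
    return realized_pnl / difficulty

# Early participant (low difficulty) vs. late participant (high difficulty)
early = pnl_to_mining_power(100.0, difficulty=2.0)   # 50.0
late = pnl_to_mining_power(100.0, difficulty=10.0)   # 10.0
```

Under this toy model, the same simulated profit yields more mining power the earlier you join, which matches the stated early-participant advantage.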

“CoinACE bridges the gap between simulated practice and real-world stakes,” said Jamie Lee, Co-Founder and CEO of CoinACE. “Our mission is to empower traders to learn, compete, and earn—turning every paper trade into an opportunity to mine tokens that hold tangible value.”

Early-Access Benefits

Pre-registered users will receive:

Exclusive ACE Airdrops – Bonus tokens awarded on day one of public launch.

Priority Access to New Features – Early invitations to beta test advanced charting tools and AI-powered analytics.

Community Rewards – Entry into a referral program offering up to 50% bonus mining power for every new trader you invite.

CoinACE’s public debut on June 1 will mark the first time traders can seamlessly transition from simulated PNL to token mining—ushering in a new era of gamified finance. With a robust trading engine built on live market data, intuitive UX, and an engaging social layer, CoinACE aims to redefine how the next generation learns and profits from crypto markets.

How to Pre-Register

Visit the CoinACE pre-registration page today to secure your spot. Pre-registration is free and open to users worldwide. Spaces are limited, so sign up now to start mining ACE tokens risk-free!




Why It Matters for Our Digital Future, Can We Move Our Lives to a Digital Universe?



Can we move our lives into a digital universe? As the metaverse takes shape and promises to define our future, questioning its trajectory and the dangers it may pose is of great importance!

The metaverse (lit. “beyond‑universe”) is a network of continuously open virtual environments where many people can interact with one another and with digital objects through their own virtual representations, i.e., avatars. You can think of the metaverse as a combination of immersive virtual reality, a massively multiplayer online role‑playing game, and the web. It has three core aspects: presence, interoperability, and standardization.

Presence is the feeling of embodiment—being in the same virtual space with other individuals. This sense of presence is known to improve the quality of online interactions and is achieved through VR technologies such as head‑mounted displays.

Interoperability is the seamless travel between virtual spaces that share the same virtual assets, such as avatars and digital items.

Standardization is what makes the platforms and services within the metaverse interoperable. As with every mass‑communication technology from the printing press to messaging apps, widespread adoption of the metaverse requires common technical standards. International bodies like the Open Metaverse Interoperability Group define these standards.

The Historical Road to the Metaverse

For the past 30 years we have witnessed an extraordinary technological revolution: the construction of a man‑made, digital universe. It began in the 20th century with transistors, computers, and the internet. Commerce, communication, and social relations—all human activities—started moving from the real to the digital. For example, we no longer go to the bazaar; with a few clicks we buy what we want. We no longer wait months to contact distant friends; we talk instantly. Our music, movies, and games are now in our pockets, not in the physical world.

While we’ve been severing our ties with the physical realm, the tech revolution has pressed on: enormous strides have been made in CPUs, GPUs, data processing, machine learning, artificial intelligence, and cryptography. By the 2010s these advances opened the door to even higher‑level ideas. Blockchain technology, for instance, made it possible to process data securely without central authority. Its most popular application is cryptocurrencies, but the same tech can be applied anywhere we don’t want power centralized and where privacy matters—digital identities, government, healthcare, insurance, law, and more.

Notice the common thread: digitizing physical‑world processes and interactions. These developments were all progressing, yet how seemingly independent technologies would connect was unclear—until the concept of the metaverse emerged.

The Science‑Fiction History of the Metaverse

The metaverse is not a new concept. It was first expressed in Neal Stephenson’s 1992 dystopian sci‑fi novel Snow Crash, though the idea was popularized earlier as “cyberspace” in William Gibson’s 1984 novel Neuromancer.

In Snow Crash the term metaverse means exactly what it does today: a virtual reality where individuals can interact with each other and their surroundings in three‑dimensional physical space via various virtual technologies. Decades ago Stephenson foresaw the metaverse as the successor to the internet and described it as follows:

“[Hiro Protagonist] is not meeting real people, of course. They are all part of the moving picture his computer draws according to the specs coming down the fiber‑optic cable. People are pieces of software called avatars. [Avatars] are the audio‑visual bodies people use to communicate with each other in the Metaverse. Hiro’s avatar is now on the street, and couples getting off the monorail can see him just as he can see them. Those people—probably four teenagers on a couch in a Chicago suburb—could strike up a conversation with Hiro in Los Angeles. But they probably wouldn’t say much more to each other than they could in real life.”

Since then the metaverse has appeared in films, books, and series from Avatar to Ready Player One, Otherland, Altered Carbon, and even The Matrix. The 2018 sci‑fi film Ready Player One, based on Ernest Cline’s 2011 novel, portrays the metaverse as the heir to the internet: a young orphaned hero escapes his bleak reality by entering a wondrous virtual universe called the “OASIS” through a headset and physically interacting in three dimensions.

The Etymology of the Term “Metaverse”

Stephenson likely chose the prefix meta‑ because it has both an established meaning and a newer one taking root in computer‑science culture. In ancient Greek meta means “after” or “beyond.” Metaphysics is that which is “beyond physics.” Your metatarsal bones are beyond (after) the tarsals of the mid‑foot. Metamorphosis means taking a form beyond the current one.

The word gained a technical connotation with the emergence of Lisp programming in 1958—partly thanks to its support for metaprogramming (programs that modify themselves at runtime). Keyboards for Lisp programmers even had a Meta key.

Ten years later John Lilly applied metaprogramming to people in Programming and Metaprogramming in the Human Biocomputer—a book Timothy Leary once called “one of the three most important ideas of the 20th century.” The book argued that our environment continually “programs” us, and LSD experiments could allow us to change our own programs.

In 1979 Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid entrenched meta’s self‑referential meaning in geek culture—meta‑game (a game about the game), meta‑learning (learning about learning), meta‑cognition (thinking about thinking), metadata (data about data), HTML meta tags (information about a web page’s content), and so on.

Thus metaverse can also be thought of as a “(digital) universe about the universe.”

Stephenson likely envisioned the metaverse as a combination of “a universe beyond our own” and meta’s self‑referential sense in geek culture. Since he first used the term in the early ’90s, the word has evolved to embrace both meanings: self‑reference and another plane of reality.

The Technological History of the Metaverse


The metaverse has begun to step out of science fiction and into tangible tech. In a July 2021 interview with The Verge, Facebook CEO Mark Zuckerberg said:

“Within the next five years I think people will see us not as a social‑media company, but as a metaverse company.”

Just three months later, at the end of October 2021, the nearly one‑trillion‑dollar firm renamed itself Meta—an attempt to escape mounting legal troubles and PR crises, but also the culmination of a strategic pivot brewing since at least 2015.

Many experts believe that in the not‑too‑distant future virtual reality will become a major part of our lives. We may spend much of our time in a vast digital universe filled with diverse virtual spaces that feel real. That universe is what we call the metaverse.

What Is the Core Logic of the Metaverse?

Put simply, the goal is to turn everything housed on today’s 2‑D internet—or anything in our lives that can be digitized—into 3‑D and move it into a shared digital universe.

During Facebook Connect 2021, Zuckerberg painted such a future.[5] Imagine watching a YouTube video today: you passively view it on a flat screen. In a “YouTube metaverse,” however, Evrim Ağacı (or any creator) would stand on a custom digital stage; you, wearing VR goggles, would be there beside us. You could interact both with us and with other viewers in real time—asking questions directly rather than typing comments. If we wished, the Q&A could even take place in a virtual café modeled after Mars.

Metaverses don’t have to replicate Earth; they can be entirely imagined worlds—other planets, Jurassic Park, the Matrix, anything. Think of the virtual backgrounds people use in Zoom calls: in a metaverse that background becomes an explorable place. Avatars can walk around, physically interact, read facial expressions, and perceive gestures.

In essence, the metaverse moves the internet off flat screens and into immersive 3‑D space. The next step—via brain‑computer interfaces like Neuralink—could enrich sensory experience (visual, auditory, olfactory, tactile) by sending signals straight to the brain.

Beyond Today’s VR: A Massive Shared Space

Compared with today’s limited VR experiences, the metaverse will unite VR and augmented reality (AR), allow avatars to move seamlessly between virtual locales, and host hundreds of millions of people simultaneously in a single, physically interactive 3‑D realm.

People could work, study, own businesses, shop, exercise, play, watch, read, attend events, socialize, build their own worlds, or join fictional universes like Star Wars. Example: friends living on different continents could gather in a shared virtual environment to collaborate or just have fun. Unlike the internet, the metaverse offers three‑dimensional physical interaction, letting distant users feel as if they are truly together.

So the metaverse is a spatial internet that can be experienced any time, anywhere, through new connections, devices, and technologies—no screens or keyboards required. Picture stepping inside Minecraft, Roblox, PUBG, or Fortnite and feeling that world as if it were real. Simply put, it is an advanced internet you inhabit rather than merely look at.

Giving Meaning to Modern Technologies

The metaverse will weave together rising tech trends and give them purpose. Cryptocurrencies could underlie its economy. As people decorate their avatars like they embellish themselves in real life, NFTs become perfect for owning digital fashion, vehicles, or accessories. As simulation power grows and the line blurs between digital and physical, what you own in the metaverse will matter as much as what you own offline—NFTs will prove that these items truly belong to you.

Virtual reality (VR) and augmented reality (AR) will be the primary technologies enabling the metaverse. With AR glasses, for instance, a tourist could walk through a city while real‑time overlays display traffic, pollution levels, or historical facts. If you can’t fully picture AR, think of apps that layer navigation arrows or Pokémon on top of the real world—the metaverse integrates that with VR and much more.

Metaverse platforms may resemble today’s online games: millions worldwide interacting in shared spaces like Fortnite or Roblox. Yet philosophically the metaverse could be even more profound. If arguments in Simulation Theory are correct, creating a digital version of our universe might be humanity’s destiny—an inevitable step for any species that reaches a certain stage of technological maturity.

The Metaverse Problem: How Realistic Is This Idea?

As you may have begun to notice, the metaverse faces both technical and philosophical hurdles.

How Far Away Are We Technologically?

Many define the metaverse as online, interactive, and delivering a near‑real sense of presence. Tech companies want to build it, yet we still lack several key capabilities.

Achieving the full potential of the metaverse will require 15–20 years, or more, of further advances in computing power and infrastructure.

Today’s VR industry is tiny compared with the smartphone or PC markets; catching up in experience and market size will take time.

VR and AR hardware must become stylish, inexpensive, highly sophisticated, and friction‑free. Current headsets are bulky, heavy, and run hot—uncomfortable for long sessions.

A photorealistic world that supports a few hundred simultaneous users already taxes today’s systems; hosting hundreds of millions in a hyper‑realistic environment and moderating that space is an entirely different scale, demanding decades of network upgrades.

It remains unknown whether the general public—beyond gamers—will embrace wearing such devices just to meet colleagues or friends.

These obstacles are solvable, but only if companies smell enough profit to keep pouring money into the problem. Thus the main issue is not technology per se.

How Far Away Are We Philosophically?

The deeper challenge lies in how the metaverse is being built. Our familiar reality emerged bottom‑up: simple building blocks evolved into complexity under fixed natural laws. Humanity’s push to create its first grand simulation is proceeding top‑down, led by firms such as Facebook (now Meta) that have repeatedly violated human‑rights and legal boundaries.

Early marketing speaks of “collaboration,” with artists, designers, and partners all helping shape the space. But once the puzzle pieces click, we know what happens: a small group of corporations, armed with sheer capital, will dominate the new reality. The rest of us—users and smaller creators—will be sidelined, reduced to statistics.

Metamonopoly: Who Will Own the Metaverse?

Think about the internet’s evolution. How many search engines do you actually use? How many social networks for active communication? How many major video sites? The metaverse is unlikely to escape a similar consolidation.

Companies like Meta survive on our data. We pay not with money but with time, attention, content, and personal information. They build detailed profiles and sell targeted access to advertisers. The metaverse will be no different—only far more invasive, because instead of interacting from the outside, we will live inside it.

Zuckerberg’s own demos mention that the system can track your facial expressions and gestures. Picture a pizza ad today: a five‑second clip on a webpage. Now imagine a lifelike avatar—designed to resemble someone you admire—sitting across from you in a virtual café, savoring that pizza. Political persuasion, consumer manipulation, and algorithmic steering all become exponentially more powerful.

If the coming “metaverse revolution” is left unchecked, it will likely enrich the same handful of giants. They will fund the colossal, trillion‑dollar build‑out only because it opens new markets, new patents, new consumer electronics, and, in short, new profit streams.

Who Stands to Gain from the Metaverse Revolution?

For different people the term metaverse evokes widely different visions, which makes it hard to explain to an outsider. Ultimately the metaverse will be defined by those who build and use it. Its recent surge in popularity has renewed speculation about what it might mean in practice.

Like every transformative technology, the metaverse will bring sweeping political, cultural, and social change—an effect known as technological determinism. Many experts in the tech industry say it will eventually replace today’s internet. If that happens, who builds it and how will shape the future of the economy and society at large. Meta and other giants are pouring billions into VR precisely because it opens new markets, new types of social networks, new consumer hardware, new patents—in short, new revenue.

Metaverse‑style ideas could help society organize more productively. Shared standards that unite many virtual worlds and AR layers in a single open metaverse might boost collaboration and reduce duplicated effort. Yet similar promises were made in the early days of the internet. Over time those hopes were steam‑rolled by surveillance capitalism and the dominance of firms such as Facebook.

The internet has excelled at connecting people and acting as a vast library of knowledge. But it has also privatized public spaces, filled every corner of life with ads, tethered us to a handful of mega‑corporations stronger than many nations, and enabled platforms to exploit the physical world through environmental harm.[9] The culprit is not the technology itself but the capitalist system that dictates how it is used.

Your 3‑D Digital Identity: Who Will Own It?

Deeper problems concern the worldview the metaverse will embody. One view treats us as passengers in a single, pre‑made reality; this is essentially how Facebook works—a platform that exists independently of any one user. An alternative, found in many Indigenous cultures, says we create reality through our actions and rituals, connecting people, land, life, and spirit into a living whole.

The danger is that a corporate‑run metaverse could impose a single‑reality monopoly, leaving no room for multiple ways of being. On social media we already see this: algorithms funnel us into a narrow band of predefined “reactions,” shaping experience by manipulation. Games like PUBG offer boundless creativity—yet only within rules set by the developer. A universal metaverse would shift far more of our lives into a virtual arena controlled by one or a few companies, with even heavier doses of advertising, data extraction, and behavioral limits.

Questioning the Metaverse’s Trajectory Is Not Tech Paranoia

We are not anti‑technology. Evrim Ağacı has long embraced innovations such as blockchain. But when corporate interests clash with the well‑being of our audience, it is our duty to warn. The current system is expert at exploiting human biological and psychological vulnerabilities; a metaverse left unregulated could damage social bonds even worse than social media, widen generational gaps, deepen polarization, and super‑charge consumer manipulation.

Identity‑driven consumption already rules the physical world. Nobody buys a Ferrari because they must travel faster from A to B; they buy it as a status symbol. Apple, Tesla, Gucci, Starbucks—all sell identity as much as product. Social‑Identity Theory shows how easily humans split into groups. Companies hook this need to boost sales; you think you’re buying an item, but you’re really buying membership.

Handing control of that identity market to a metaverse monopoly would be perilous. This isn’t just the old internet you observe; it’s a space you inhabit with a full digital body. We must shape it from the start around reason, science, and human values, not seductive hype.

Tim Berners‑Lee feels “heartbroken” over what the web became; Robert Oppenheimer famously quoted the Bhagavad Gita—“Now I am become Death”—upon seeing the A‑bomb. If we fail to plan wisely today, we may stare at our new creation with similar dread.

Practical concerns loom as well:

We already struggle to block harassment online—how will we handle abuse when avatars can touch us?

Can firms that leak survey data protect full‑body motion files?

In a metaverse Moon colony, which nation’s laws apply to assault?

Will Africa and the Global South enjoy equal access, or be locked out?

Are our brains, biologies, and societies remotely ready for such immersion?

Conclusion

There are many questions, and if you don’t ask them—or engage in the process—someone else will ask and answer for you. Remember: companies racing to build the metaverse have already influenced elections, enabled genocidal coordination in Myanmar, ignored human‑trafficking on their platforms, worsened teenage mental health, and spread deadly misinformation about vaccines. Those who claim “no one will own the metaverse” have literally renamed themselves Meta.

A richly sensory connection between people and universe is a wonderful idea. The metaverse itself is a wonderful idea. But if we erect our beyond‑universe atop a neoliberal capitalist system, we risk birthing a monster that deepens inequality in unprecedented ways.

The future is not yet written; it depends on the standards, governance, and ethical constraints we insist on now.





Google Unveils Updated Gemini 2.5 Pro Model Ahead of I/O 2025



Google has introduced the updated version of its flagship AI model, called Gemini 2.5 Pro Preview, just ahead of the highly anticipated I/O developer conference taking place in the coming weeks. The full reveal of this new version is expected during the annual event.

The enhanced model is available to developers through the Gemini API, Vertex AI, and AI Studio platforms. Despite its improvements, it remains priced the same as the current Gemini 2.5 Pro model. Google has also integrated the updated model into its Gemini chatbot applications for both web and mobile devices.
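For developers, access through the Gemini API follows the public `generateContent` REST shape. Below is a minimal sketch of building such a request; the model id is an assumption for this preview release (check Google's model listing for the current name), and nothing is actually sent by this snippet.

```python
import json

# Build (but do not send) a Gemini API generateContent request body.
# The model id below is an assumption for the I/O preview release.
MODEL = "gemini-2.5-pro-preview-05-06"
URL = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"

def build_request(prompt: str) -> dict:
    """Return the JSON body expected by the generateContent endpoint."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

body = json.dumps(build_request("Refactor this function and add unit tests."))
# POST `body` to URL with an "x-goog-api-key: <YOUR_KEY>" header to run it.
```

The same body shape works across the web, Vertex AI, and AI Studio surfaces that expose the model.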

What’s New in Gemini 2.5 Pro Preview?

According to Google, this special I/O edition offers major advancements in building interactive web applications. The model has been significantly enhanced in areas like code conversion, code editing, and UI/UX design. It aims to solve many recurring issues faced by developers, making the development process smoother and more efficient.

Gemini 2.5 Pro Preview also claimed the top spot on the WebDev Arena leaderboard. Additionally, it scored an impressive 84.8% on VideoMME, a benchmark that evaluates video comprehension capabilities.

While further details about the model are currently limited, Google is expected to unveil more insights during the I/O developer conference on May 20–21.





10 Free AI Apps to Help You Study in 2025 – MetaversePlanet



AI is not just revolutionizing the business world or social media; it’s transforming education as well. Many students today are leveraging AI technologies to help with their studies, whether it’s for planning assignments, reviewing subjects, or solving tough problems. Even better, most of these useful tools are completely free. In this article, we’ve compiled the most recent and useful AI-based study tools for 2025 that will make students’ lives easier.

Using AI to study isn’t just the future—it’s the reality of today. These tools can be an assistant guiding you through a complex math problem or a writing helper condensing long texts into simpler forms. If you’re looking for a digital companion while studying, these free AI tools will help you use AI to its fullest potential.

Best 10 Free AI Study Tools You Can Use in 2025

We’ve gathered a list of the most useful, completely free AI-powered study tools of 2025. Here are the best AI tools for studying:

1. Google for Education

Google for Education offers a suite of digital tools designed for schools and universities. It includes familiar apps like Google Classroom, Docs, Slides, Sheets, and Meet. Students and teachers can work on the same file, track assignments easily, and communicate more efficiently. Since all these tools are cloud-based, they can be accessed from anywhere.

Pricing: Basic features are free.

2. ChatGPT

ChatGPT is a versatile AI tool that can assist students across subjects such as math, science, and literature, simplifying complex concepts and helping with grammar, text structure, and content organization. It can summarize long texts, generate ideas, and edit written content, and during exam periods it is useful for creating study schedules and generating practice quizzes.

Pricing: Free version available; paid plans are also offered for additional features.

3. Quizlet

Quizlet is a popular platform that helps students study by creating their own study sets or using others’ materials. The AI-powered assistant supports learning through flashcards, tests, and interactive games. Visuals, audio reading, and interactive diagrams make content easier to understand. The AI assistant can also generate personalized questions based on the student’s learning progress.

Pricing: Basic version is free; premium features are available for a subscription fee.

4. WorkHub

WorkHub consolidates all information sources in schools and universities into a single platform. From class notes to academic papers, content is well-organized, and both students and educators can easily access needed information. The AI-powered chatbot offers personalized support, providing instant answers to questions about educational content, making the learning process smoother and more fluid.

Pricing: 7-day free trial; paid plans available after.

5. ChatPDF

ChatPDF simplifies working with PDF documents. When you upload study materials like textbooks, research papers, or academic articles, the AI analyzes them and answers questions based on the document’s content. ChatPDF is especially useful for academic research as it saves time by allowing you to ask questions and directly retrieve relevant information from long documents.

Pricing: Free plan allows 2 PDF uploads per day and up to 20 questions; documents must not exceed 120 pages.

6. Doctrina AI

Doctrina AI supports the study process by summarizing notes and generating test questions. In the free version, students can create notes based on any topic and get book recommendations. It also lets students generate quiz questions at different difficulty levels for practice.

Pricing: Free version offers unlimited note and quiz creation. The Premium version with additional features costs a one-time fee of $39.99.

7. Socratic

Socratic, powered by Google’s AI technology, is a mobile learning app that helps students understand topics in math, science, literature, and social sciences. Students can input questions via text or voice, and the app provides step-by-step explanations along with visual aids. It simplifies complex subjects like chemistry and algebra, making them easier to grasp.

Pricing: Free.

8. Kunduz

Kunduz allows students to upload photos of unsolved problems and receive detailed solutions from real tutors as well as AI. It's especially helpful for preparing for Turkish national exams such as the YKS (university entrance) and LGS (high-school entrance). The AI-powered smart test system offers automatic practice tests, identifies learning gaps, and provides guidance and video solutions for mistakes.

Pricing: Free trial available; membership plans for the full version.

9. Course Hero

Course Hero is an AI-powered educational platform that streamlines homework help. It analyzes documents, including multiple-choice, fill-in-the-blank, or open-ended questions, providing fast answers and explanations. The AI assistant processes the content and highlights essential concepts, which helps students focus on the core material.

Pricing: Free trial available; membership plans for additional features.

10. Canva

Canva’s study planner tool makes it easy for students and teachers to create weekly or daily lesson plans in a visually organized way. With drag-and-drop templates, students can plan the content, objectives, and duration of each lesson. These planners are stored in the cloud, allowing easy access and updates from anywhere.

Pricing: Free version available.

These 10 free AI-powered apps are the best tools for students looking to enhance their study experience in 2025. By using these tools, students can improve their learning efficiency and access support tailored to their individual needs.





ZKsync Achieves Full EVM Equivalence, Enabling Seamless Deployment Of Ethereum-Based Apps



In Brief

ZKsync has achieved EVM equivalence, now allowing developers to deploy applications using standard EVM bytecode compiled from Solidity directly onto the ZKsync network.


Layer 2 scaling platform ZKsync announced that it has reached Ethereum Virtual Machine (EVM) equivalence. This means developers can deploy applications using standard EVM bytecode, compiled from Solidity, directly onto the ZKsync network without relying on specialized tools like zkSolc, Foundry ZKsync, or hardhat-foundry.

The bytecode is executed through a Bytecode Interpreter, which allows unaltered EVM code to run on the ZKsync network, preserving compatibility with Ethereum-based projects. 

While the interpreter enables this compatibility, EraVM continues to serve as the main execution environment. Rather than replacing EraVM, the EVM Interpreter functions as a layer that facilitates the execution of EVM bytecode within it. When EVM contracts are deployed, their bytecode is flagged with a specific marker so the system knows to route execution through the interpreter. During runtime, EVM opcodes are translated into operations that can be executed within the EraVM environment, ensuring behavior consistent with Ethereum standards. Gas usage is also interpreted within this context, with actual execution costs settled using native EraVM gas metrics.
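The marker-based routing described above can be illustrated with a toy dispatcher. The marker value, names, and structure here are assumptions made for illustration, not ZKsync's actual implementation.

```python
# Toy sketch of routing execution by a deployment-time bytecode marker.
# EVM_MARKER and the dispatch logic are illustrative assumptions only.

EVM_MARKER = 0x02  # assumed version byte meaning "interpret as EVM bytecode"

def route_execution(bytecode: bytes) -> str:
    """Decide how a deployed contract's code should be executed."""
    if bytecode and bytecode[0] == EVM_MARKER:
        # EVM contracts: opcodes are translated and run inside EraVM
        return "evm-interpreter"
    # Native EraVM contracts run directly
    return "eravm-native"

route_execution(bytes([0x02, 0x60, 0x00]))  # routed through the interpreter
route_execution(bytes([0x00, 0x01]))        # runs natively on EraVM
```

The point of the sketch is only the dispatch shape: one execution environment (EraVM) remains authoritative, and the marker tells it when to translate EVM opcodes rather than execute native bytecode.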

The main capabilities of the upgrade include the ability to deploy contracts written in Solidity and Vyper without requiring recompilation through tools like zksolc or zkvyper. Developers can use widely adopted Ethereum development environments such as Foundry, Hardhat, and Remix without needing to install additional plugins or make tool-specific changes. 

The system also ensures that contract addresses generated using CREATE and CREATE2 match those produced on Ethereum, maintaining consistency across networks. Additionally, certain canonical contracts, such as the Create2 deterministic deployment proxy, Multicall3, and the ERC-2470 SingletonFactory, are already deployed and ready for use within the environment.
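The cross-network address consistency follows from the deterministic CREATE2 formula in EIP-1014: the address depends only on the deployer, a salt, and the init-code hash. A sketch of that derivation is below; note that Python's standard library ships SHA3-256 rather than Ethereum's keccak-256 (the padding differs), so the hash function is a parameter and the stand-in digests will not match real Ethereum addresses.

```python
import hashlib

# CREATE2 address formula (EIP-1014):
#   address = keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:]
# Pass a real keccak-256 (e.g. from pycryptodome or eth_utils) for genuine
# Ethereum addresses; hashlib.sha3_256 below is only a structural stand-in.

def create2_address(hash_fn, deployer: bytes, salt: bytes, init_code: bytes) -> bytes:
    assert len(deployer) == 20 and len(salt) == 32
    preimage = b"\xff" + deployer + salt + hash_fn(init_code)
    return hash_fn(preimage)[-20:]  # last 20 bytes form the address

sha3 = lambda data: hashlib.sha3_256(data).digest()  # stand-in, NOT keccak-256
addr = create2_address(sha3, b"\x11" * 20, b"\x00" * 32, b"\x60\x00")
```

Because the formula is pure, the same deployer, salt, and init code produce the same address on any chain that implements CREATE2 faithfully, which is exactly the consistency property the upgrade preserves.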

The introduction of the EVM Interpreter was formally approved through ZKsync’s governance process under ZKsync Improvement Proposal 9 (ZIP-9), and is now active as part of protocol version 27. This functionality is currently live on the Era network and will soon be extended to other ZK chains within the Elastic Network.

ZKsync: Advancing Ethereum Scalability With Zero-Knowledge Rollups

Developed by Matter Labs, ZKsync leverages zero-knowledge rollups to improve transaction efficiency and reduce fees on the Ethereum network. It does this by processing multiple transactions off-chain and submitting compact cryptographic proofs to Ethereum’s mainnet, which helps maintain the network’s security and decentralization while increasing scalability.
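The core rollup idea, processing many transactions off-chain and posting only a compact commitment on-chain, can be illustrated with a toy batch. A real ZK rollup posts a validity proof alongside the state commitment; the bare hash below is only a placeholder for that:

```python
import hashlib
import json

# Conceptual sketch: many transactions are handled off-chain, and only a
# fixed-size commitment reaches the base chain. A bare hash carries no proof
# of correct execution; it stands in for the real validity proof here.

def commit_batch(transactions: list) -> str:
    serialized = json.dumps(transactions, sort_keys=True).encode()
    return hashlib.sha256(serialized).hexdigest()  # posted on-chain

batch = [
    {"from": "alice", "to": "bob", "amount": 5},
    {"from": "bob", "to": "carol", "amount": 2},
]
commitment = commit_batch(batch)
print(len(commitment))  # 64 hex chars, regardless of batch size
```

The point of the sketch is the asymmetry: the on-chain footprint stays constant no matter how many transactions the batch contains, which is where the fee savings come from.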

The platform has gone through two primary stages of development. The earlier version, ZKsync Lite, was designed to enable quick and affordable token transfers. Its successor, ZKsync Era, adds full compatibility with the EVM, allowing developers to deploy smart contracts and decentralized applications directly, without needing to adapt or rewrite their code for a different environment.

ZKsync also features built-in account abstraction, which enhances wallet usability and flexibility. Its expanding ecosystem includes a variety of decentralized applications such as decentralized finance (DeFi) protocols, non-fungible token (NFT) marketplaces, and decentralized exchanges.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Alisa Davidson, a dedicated journalist at MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.

How Nifty Island’s AI Agents Bring Life to Virtual Worlds | NFT News Today

Anyone following Web3 Gaming projects may have spotted Nifty Island. In this Ethereum-based world, players come together to build and play games, explore islands crafted by fellow gamers, and even bring their NFTs to life.

Now, the platform has started letting players drop AI agents into their virtual spaces. Instead of lifeless non-player characters (NPCs), these newly introduced AI agents can respond in real time and even run specific actions on the blockchain – a jump from the usual “static NPC” formula.

NPCs That React

Traditional video game NPCs tend to follow a strict script or offer the same handful of repetitive lines. On the other hand, Nifty Island’s AI agents aim to bring a sense of personality and adaptation. They can hold conversations, perform in-game tasks, and acknowledge various game elements.

What Nifty’s AI Agents Can Do

Converse in Real Time: Chat with players through text-based prompts.

Perform On-Chain Actions: Trigger blockchain-logged tasks, making them more than background characters.

Recognize Game Context: Identify the environment’s quest lines, island names, or notable landmarks.

Developers behind Nifty Island wanted a user-friendly approach for adding AI-driven NPCs. To experiment, a player opens the build menu, selects an NPC object, and attaches it to an existing AI model such as Luna, AgentYP, or VaderAI. The platform also supports custom setups for anyone who would rather build their own agent from scratch. Once placed on an island, the agent can interact with any passing player.


Token-Based Deployment Rights

By default, users can only deploy AI agents on their own islands. There is a notable exception, however, for holders of 100,000 $ISLAND tokens: anyone with that amount gains the right to activate AI agents across all Nifty Island islands, making them available for other players to drag and drop onto their own islands.

User and Platform Integrations

Beyond everyday players who like tinkering with AI, Nifty Island also caters to more serious creators. Two main paths exist:

User-Agents:

Independent developers can connect their custom AI projects through a simple REST API. Once integrated, these agents can respond to text, dance on command, and carry out tasks, all depending on how the owner programs them.
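Nifty Island has not published the API shape in this article, so as a purely hypothetical sketch, a user-agent backend might look like a small HTTP service that receives chat messages and returns an action plus a line of dialogue. Every name and field below is invented for illustration:

```python
# Hypothetical sketch of a user-agent backend. Nifty Island's actual REST
# API shape is not documented in the article, so the endpoint, payload
# fields, and reply logic below are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def agent_reply(message: str) -> dict:
    """Decide what the agent says or does for an incoming chat message."""
    if "dance" in message.lower():
        return {"action": "dance", "say": "Watch this!"}
    return {"action": "idle", "say": f"You said: {message}"}

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read the JSON body and answer with the agent's chosen reply
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        reply = agent_reply(json.loads(body).get("message", ""))
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve: ThreadingHTTPServer(("", 8080), AgentHandler).serve_forever()
```

A real integration would follow whatever endpoint and payload schema Nifty Island's REST API actually defines; the sketch only shows the request/response pattern the article describes.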

Platform Agents:

Larger-scale AI services or launch platforms (like Virtuals or Holoworld) can arrange an automated integration. This lets their AI agents become readily available to any token holder in that ecosystem, streamlining the process for creators who aren’t as technical.

From APIs to a Future SDK

While everything currently hinges on an API-based system, Nifty Island’s team has laid out plans for a forthcoming Software Development Kit (SDK). The goal is to simplify the creation and testing of these AI characters. Future enhancements may include:

Expanded Action Space: Agents could soon do more than chat. Fighting, racing, or even building structures might be on the horizon.

Unprompted Autonomy: A WebSocket-based approach could allow agents to initiate actions without waiting for a user prompt.

Behavioural AI: Thanks to collaborations (including one with ARC), agents might eventually tackle tasks usually reserved for players, like driving or battling in-game.

Agentic Building: There’s talk of letting AI agents actually manipulate the islands themselves, blurring the line between player-driven and AI-driven construction.
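The "unprompted autonomy" idea above can be simulated with a plain event loop: the agent acts on its own timer rather than waiting for a player prompt. A production version would push these actions over a WebSocket; this sketch just collects them locally, and the behaviors are invented examples:

```python
import asyncio

# Sketch of unprompted autonomy: the agent initiates actions on its own
# schedule instead of reacting to user input. The behavior list is invented.

async def autonomous_agent(actions_out: list, ticks: int):
    behaviors = ["wave at a passing player", "wander to the beach", "start a dance"]
    for i in range(ticks):
        await asyncio.sleep(0.01)  # agent's internal timer, no prompt involved
        actions_out.append(behaviors[i % len(behaviors)])

actions = []
asyncio.run(autonomous_agent(actions, ticks=3))
print(actions)  # ['wave at a passing player', 'wander to the beach', 'start a dance']
```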


Injecting Unique Knowledge

One standout feature is the ability to bake “Nifty Island knowledge” directly into an AI’s programming. That means an agent could potentially recall island details—like points of interest or puzzle hints—and share them with curious visitors. Imagine stepping onto an island and having a friendly AI greet you by name, point out landmarks, or direct you toward hidden quests.

Although Nifty Island is rooted in Ethereum, the team hasn’t shied away from exploring other networks. They are integrating with Ronin, an Ethereum sidechain built around gaming, and they’ve also launched the $ISLAND token on Solana. The end goal seems to be broader accessibility, ensuring anyone with an interest in AI-driven gameplay can join the fun, regardless of which blockchain they prefer.

Elevating the Genre

The addition of AI agents in Nifty Island speaks to a larger trend in blockchain-based games. Most virtual worlds lean on predictably scripted NPCs, but incorporating AI shifts gameplay toward something more organic. Characters can learn, adapt, and change over time, giving the world a richer, more lived-in feel. It’s a refreshing direction for anyone who wants deeper, more dynamic experiences in a digital realm.

At this point, it feels like Nifty Island is just getting warmed up. Given the appetite for interactive, autonomous characters in virtual spaces, it wouldn’t be surprising to see more players, creators, and AI enthusiasts hopping on board.




Everything You Need to Know About Treasure DAO AI Agents | NFT News Today

Treasure DAO AI Agents are reshaping digital ownership by empowering interactive NFT experiences on the blockchain. This article provides everything you need to know about these autonomous agents, from how they function to the technology behind them.

Key Takeaways

Treasure DAO introduced an AI Agent Creator that converts static NFTs into autonomous entities.

Supported NFTs include top-tier collections like Bored Ape Yacht Club and Pudgy Penguins.

Mage serves as a comprehensive platform for multi-agent orchestration in Web3 games.

$MAGIC tokens fuel AI agents’ activities, including gaming and social media interactions.

Strategic refocusing positions Treasure DAO for sustainable growth and broader AI adoption.

What Are Treasure DAO AI Agents?

Treasure DAO AI Agents are interactive, self-governing digital characters that expand how NFTs operate. Instead of simply displaying art or granting collectible value, these agents can chat with users, participate in blockchain-based games, and post on social platforms. By combining AI with blockchain, Treasure DAO aims to enrich NFT functionality across its ecosystems.

Key Points

Autonomous Characters: The agents act independently with their behaviors based on custom settings.

Powered by Neurochimp: This proprietary framework offers memory and contextual understanding.

Uses $MAGIC Tokens: Each agent needs $MAGIC as “fuel” for its activities, ensuring a clear link between token utility and agent functionality.

How the AI Agent Creator Works

The AI Agent Creator, launched on May 5, 2025, enables NFT holders to energize their assets with AI-driven features. Supported collections include Bored Ape Yacht Club, Pudgy Penguins, Azuki, Meebits, and Milady. By paying a small fee of 0.0025 ETH (around four to five dollars), owners can activate their digital collectibles and introduce unique personalities.

Once activated, agents rely on $MAGIC tokens, Treasure DAO’s native cryptocurrency, for everyday tasks. Although 10 $MAGIC (approximately $1.60) is the suggested baseline, holders can adjust this amount depending on how active they want their agents to be. Whether it’s chatting on the Treasure Marketplace or acting in on-chain games like Gigaverse, these agents stay persistent and adaptable.
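A quick sanity check of the article's figures, using the snapshot prices implied above rather than live rates: 0.0025 ETH at roughly $1,800 per ETH comes to about $4.50, and the suggested 10 $MAGIC baseline adds about $1.60:

```python
# Back-of-the-envelope check of the article's figures. Prices are the
# article's snapshot values (an implied ~$1,800 ETH, $0.16 per $MAGIC),
# not live market rates.
eth_price = 1800.0          # implied by "$4-5 for 0.0025 ETH"
activation_fee_eth = 0.0025
magic_price = 0.16          # implied by "10 $MAGIC ~ $1.60"

fee_usd = activation_fee_eth * eth_price
total = fee_usd + 10 * magic_price  # activation plus the suggested baseline
print(round(fee_usd, 2), round(total, 2))  # 4.5 6.1
```

So the whole setup, activation fee plus the recommended fuel deposit, lands around six dollars at those prices.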

Introducing Mage: The Next-Generation Platform

In February 2025, Treasure DAO announced Mage, a broader system meant to support AI-driven experiences across decentralized environments. Built on the Eliza agent framework, Mage integrates directly with Treasure’s gaming infrastructure to provide:

Autonomous Agent Creation

Mage hides back-end compute and inference costs under “mana,” which is linked to $MAGIC. This lets creators focus on innovation while the platform manages resource allocation.

Cross-Chain Wallets

Each AI agent receives a non-custodial wallet compatible with multiple blockchains, including Treasure, Solana, and EVM networks. Control belongs to the community and the agent itself, reducing single points of failure.

Security Infrastructure

Trusted Execution Environments (TEEs) protect agent decisions, keeping data tamperproof and ensuring reliability across the network.

A Closer Look at the Strategic Shift

Treasure DAO’s move into AI follows a financial reorganization in April 2025. The team faced an annual burn rate of $8.3 million, with treasury resources running low. Leadership changes placed John Patten back in charge, and the focus narrowed to four critical areas.

Initiatives like the Treasure Chain and certain third-party game projects were discontinued, alongside a reduction in staff. This restructuring was intended to extend operational funding into 2026 while doubling down on AI as a key growth driver.

Expanding AI Within Treasure DAO

Treasure DAO is creating a broader AI ecosystem beyond the Agent Creator and Mage:

AI Agent Marketplace: A dedicated platform for buying and selling agents. Each one has distinct skill sets, memories, and achievements.

Smolworld: A simulation where “Smols” function as self-managing digital characters. Like a modern “Tamagotchi,” these entities can make decisions, form connections, and interact in on-chain games.

AI Smols in Gigaverse: Treasure first tested AI agents in Gigaverse, an RPG on the blockchain. This early version proved that AI companions could enhance engagement and autonomy in gaming.

Frequently Asked Questions

1. How can I create a Treasure DAO AI Agent?

Owners of supported NFTs can head to the AI Agent Creator. A small ETH fee and a recommended deposit of 10 $MAGIC get things started.

2. Which NFT collections are supported right now?

The current list features Bored Ape Yacht Club, Pudgy Penguins, Azuki, Meebits, and Milady, with potential for additional projects in the future.

3. What do agents do once activated?

They can talk directly to users on the Treasure Marketplace, play in blockchain games like Gigaverse, and even post on social networks such as X (formerly Twitter).

4. What is Mage, and how does it differ from the AI Agent Creator?

Mage is a broader platform introduced by Treasure DAO that facilitates multi-agent orchestration. It handles computational costs under “mana,” fueled by $MAGIC tokens, and supports cross-chain functionality.

5. Why did Treasure DAO pivot so heavily into AI?

Financial pressure forced the pivot. Focusing on AI agent tech allowed Treasure DAO to simplify operations, reduce costs and tap into emerging digital asset trends.

Conclusion

Treasure DAO AI Agents add a new layer of interaction and utility to NFTs. With the Mage platform, Smolworld, and other projects, Treasure DAO aims to reimagine how users interact with digital assets. This approach also supports $MAGIC’s long-term value, making these autonomous characters a core element of Treasure DAO’s future in blockchain-based gaming and beyond.




Apple Now Allows NFT and Crypto Payment Links in U.S.: What You Need to Know | NFT News Today

Apple has just shaken up the landscape for anyone building apps around NFTs and cryptocurrencies on iOS—at least in the United States. Thanks to a recent U.S. court decision from the Epic Games v. Apple lawsuit, developers can now include external payment links that bypass Apple’s longstanding 30% commission.

This shift could be a major opportunity for Web3 projects, as it gives them more breathing room for innovation and revenue. But there are still a few important details you’ll want to keep in mind.

Key Points to Know

External links are now allowed: For the first time, U.S. developers can let users buy digital assets through external payment options, sidestepping Apple’s hefty fees.

Compliance still matters: Even though Apple can’t take a cut on outside crypto transactions, apps must still comply with Apple’s review guidelines and may be subject to oversight for user experience and disclosures.

NFT marketplaces can flourish: If you’re building a platform like OpenSea or Magic Eden, you can now offer full transaction features directly within your iOS app.

Restrictions remain in place: Crypto mining on iOS devices and rewarding users with tokens for tasks are still off the table.

U.S.-only for now: This update is strictly for the U.S. App Store. Apple’s policies elsewhere in the world aren’t budging yet.

Why Did Apple Change Its Policy?

These new rules come on the heels of the Epic Games v. Apple case, which resulted in a mixed ruling that forces Apple to allow external payment links in its U.S. App Store. Before this, if you wanted to sell NFTs or offer crypto transactions inside your iOS app, you had to use Apple’s in-app purchase system, which came with a steep 30% commission.

This commission structure had long drawn criticism from developers, particularly those in emerging sectors like Web3, who argued it stifled innovation and unfairly cut into revenue.

Now, iOS apps in the U.S. can include buttons or links directing users to outside platforms for their NFT and crypto transactions.

What Developers Can Do Now

If you’re working on an app that involves NFTs or crypto, here’s how you can leverage Apple’s new stance:

Embed External Payment Links: You can integrate clear buttons or links that send users to your preferred payment platform, whether for buying NFTs or transacting with cryptocurrencies.

Enable Full NFT Functionality: Developers can now allow users to mint, list, buy, sell, and transfer NFTs without relying on Apple’s in-app purchases.

Stay Compliant: Rules and regulations around crypto vary from one place to another. Keep up with local laws and remember that Apple still bans certain activities, like incentivising users with crypto rewards or mining on devices.

How This Could Play Out in Real Life

The ripple effects of this policy could be felt across the Web3 world. Think about NFT marketplaces: before, many could only let you view NFTs on an iPhone. Now, they can turn their apps into full-blown marketplaces where you can browse and buy on the go.

For example, platforms like OpenSea may now allow users to mint, buy, and sell NFTs directly within their apps using external payment processors—something that was previously blocked.

This also opens doors for play-to-earn games, DeFi apps, and crypto wallets. With fewer hurdles (and smaller fees), developers can streamline user onboarding and make it easier for people to dive into Web3.

Still, shifting users away from a familiar in-app purchase flow could raise concerns about trust and security. Some users might worry about leaving the App Store ecosystem to complete a transaction on an external site.

It’s also worth noting that these changes don’t affect developers working outside the crypto or NFT space—standard in-app purchase rules remain in place for most apps.

Frequently Asked Questions

Is this policy worldwide?

No, it’s currently limited to the U.S. App Store. Apple hasn’t changed its global guidelines, so developers can’t add external payment links in other countries yet.

Can developers give out token rewards or allow mining?

No, Apple still prohibits crypto mining on devices and any kind of reward program where users get tokens for performing tasks.

Do I need a special entitlement to add payment links?

No special permission is required now. U.S. developers can direct users to external NFT collections or payment systems without an extra blessing from Apple.

Is Apple going to appeal the court decision?

Yes, Apple has already said it plans to appeal. However, these changes are active for now because of the court order. If Apple succeeds later, some policies might change again.

Final Thoughts

By allowing external payment links for NFTs and crypto transactions, Apple is responding to legal pressure in a way that could accelerate blockchain adoption—at least in the U.S. For many developers, this is a breath of fresh air that could spark more creative uses of Web3 technology on iOS.

At the same time, this remains a fast-moving space with potential legal and security hurdles. If you’re a developer, stay up to date on any regulatory changes and keep an eye on Apple’s ongoing legal proceedings.

For now, it’s a good moment to seize the opportunity and bring truly decentralized experiences to iOS users in the United States.




“Write Prompt, Short Ready!”: 7 AI Tools That Automatically Generate YouTube Shorts

In today’s fast-moving social media landscape, AI-powered video tools are a game-changer for creators who need to pump out engaging YouTube Shorts in minutes rather than hours. Below are seven top AI solutions—each with a clear breakdown of features and pricing—that will streamline your workflow and help you stay ahead of the algorithm.

InVideo AI

InVideo AI transforms your scripts into fully edited vertical videos in a few clicks. Leveraging a massive stock library, built-in text-to-speech and voice-cloning, plus automated subtitles, it’s ideal for marketers and influencers who want polished content without the steep learning curve.

Key Features: Text-to-video generation, 16M+ stock assets, AI voiceovers, auto-captioning, drag-and-drop effects

Pricing: Free plan includes watermark and limited generative credits; Plus at $28/month (billed annually) unlocks 60 sec generative credits; Max at $48/month adds more credits and premium support

Kapwing Shorts Generator

Kapwing’s AI Shorts tool turns a single line of text into a fully edited clip, complete with B-roll suggestions, realistic voice-overs and on-screen captions. Strong collaboration and branding features make it perfect for small teams.

Key Features: One-click script-to-video, auto-subtitles, premium text-to-speech, brand kit, 4K export

Pricing: Free tier (720p, watermark, 4 min exports); Pro at $16/user per month (annual) or $24/month; Business at $50/user per month; Enterprise on request

OpusClip

OpusClip excels at slicing long-form footage into viral-ready clips in seconds. Its AI ranks the most engaging moments, adds animated captions in multiple languages and lets you auto-post to every major platform.

Key Features: AI-driven clip selection, virality scoring, multi-language captions, auto-posting, simple editor

Pricing: Free forever (60 min processing/month, watermark); Starter at $15/month (150 min); Pro at $29/month ($14.50/mo if billed annually)

Runway Gen-3/4

Runway offers cutting-edge text-to-video with both Gen-4 Turbo and Gen-3 Alpha models. Import your own SFX, remove watermarks on paid tiers and upscale to 4K—all managed via a credits system.

Key Features: Gen-4 & Gen-3 video models, SFX import, 9:16 aspect support, 4K upscaling, asset storage

Pricing: Free (125 one-time credits ≈ 25 sec Gen-4 Turbo); Standard $12/user per month (625 credits/mo); Pro $28/user per month (2,250 credits/mo); Unlimited $76/user per month (unlimited generations); Enterprise custom

Pika Labs

Pika Labs is built for ultra-short loop animations—perfect for snackable Shorts. Generate eye-catching 4-sec visuals, add music or voice, and publish watermark-free.

Key Features: Instant 4 sec loops, “Chill” unlimited generations, watermark-free outputs, commercial usage

Pricing: Basic free (300 credits total); Standard $10/month (1,050 credits/mo); Pro $60/month (3,000 credits/mo); 20% off on annual plans

YouTube Create (Beta)

Built straight into the YouTube mobile app, this free editor uses AI to sync music, add captions and suggest dynamic templates—no extra downloads required.

Key Features: In-app AI editor, music synchronization, auto-captioning, one-tap styling

Pricing: Completely free; available on iOS/Android in select countries

AI Agent Stacks (Autogen + FFmpeg)

For developers and power users, AI Agent Stacks chain together Autogen’s multi-agent workflows with FFmpeg’s media engine to fully automate script → video pipelines on your own servers.

Key Features: Storyboard automation, multi-agent orchestration, end-to-end video production

Pricing: Open-source & free (hosting and compute costs on you)
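The render half of such a pipeline often reduces to assembling an FFmpeg command. As an illustrative sketch (file names are placeholders, and the agent side that produces the script, narration, and clips is omitted), a Shorts-format encode might be built like this:

```python
# Sketch of the FFmpeg half of a script-to-Shorts pipeline: an agent stack
# would generate the narration and clip list, then hand off to a render step
# like this one. The command is built but not executed here, and the file
# names are placeholders.

def build_shorts_command(clip: str, narration: str, out: str) -> list:
    return [
        "ffmpeg",
        "-i", clip,                         # source footage from the agents
        "-i", narration,                    # TTS narration track
        "-map", "0:v", "-map", "1:a",       # video from clip, audio from narration
        "-vf", "scale=1080:1920,setsar=1",  # 9:16 vertical frame for Shorts
        "-t", "60",                         # Shorts cap at 60 seconds
        "-c:v", "libx264", "-c:a", "aac",
        "-shortest",
        out,
    ]

cmd = build_shorts_command("scene.mp4", "voiceover.wav", "short.mp4")
print(" ".join(cmd))
```

Running it is then a matter of `subprocess.run(cmd, check=True)`, assuming ffmpeg is installed on the host.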

Quick Comparison Table

Tool | Free Plan | Paid Pricing | Key Features
InVideo AI | Watermarked exports, limited AI credits | Plus $28/mo; Max $48/mo | Script-to-video, stock library, TTS, auto subtitles
Kapwing Shorts Generator | 720p, watermark, 4 min exports, 10 min subtitles | Pro $16/u/mo (annual) or $24/mo; Biz $50/u/mo | Script-to-video, auto-captions, premium TTS, 4K export
OpusClip | 60 min clippings, watermark | Starter $15/mo; Pro $29/mo ($14.50 billed annually) | AI clip selection, virality score, multi-lang captions
Runway Gen-3/4 | 125 one-time credits (~25 s Gen-4 Turbo), 3 projects | Std $12/u/mo; Pro $28/u/mo; Unlimited $76/u/mo | Text-to-video, SFX import, upscaling, watermark removal
Pika Labs | 300 credits total | Standard $10/mo; Pro $60/mo | 4 sec loops, Chill mode, watermark-free, commercial use
YouTube Create (Beta) | Full features, mobile only | N/A (free) | In-app AI editor, music sync, auto captions, templates
AI Agent Stacks | Full functionality (self-hosted) | N/A (open-source) | Autogen + FFmpeg pipeline, multi-agent orchestration

With these tools at your fingertips, you can supercharge your YouTube Shorts production, save countless editing hours, and keep your channel fresh with minimal effort. Pick the one that best fits your budget and workflow, and start creating scroll-stopping Shorts today!


Google Introduces Gemini AI to Children Under 13! – Metaverseplanet

Google has announced that its generative AI tool, Gemini, will soon be accessible to children under the age of 13. However, this child-friendly version of Gemini will be slightly different from the standard one.

According to a report by The New York Times, the U.S.-based tech giant Google is preparing to make Gemini available to children under 13 starting next week. A Google spokesperson explained that this special version will include specific safety layers to ensure a secure experience for younger users. Additionally, any interactions made by children will not be used to train the AI model.

To enable children to use Gemini, parents must set up Family Link. Without this parental control integration, the service will not be available to minors.

If you want your child under 13 to use Gemini, the first step is to complete the Family Link setup. This means assigning parental supervision rights to the email account you’ve created for your child. Once done, your child will be able to safely access Google Gemini.

Google’s move to open up Gemini to younger children has sparked a debate: While it offers an early introduction to artificial intelligence, it also raises questions about safety and digital responsibility.

What do you think about Google’s decision to introduce AI to young children? Could this step unlock creativity, or does it come with potential risks? Don’t forget to share your thoughts with us!

