Metaverse


Google Introduces Stitch: A Perfect Tool for Beginners in the Coding Scene

The number of tools capable of writing code is rapidly increasing, and artificial intelligence has become a central pillar of modern software development. Google made its move in this field at its recent Google I/O 2025 event, unveiling Stitch, a new AI-powered coding and design assistant.

Stitch helps developers create web and mobile application interfaces using just a few words or even an image. It also generates the corresponding HTML and CSS code for these designs. Users can choose between Gemini 2.5 Pro or Gemini 2.5 Flash models while using Stitch. The tool supports both coding and interface design.

Several other AI tools for programming have also emerged, including those from companies like Anysphere, Cognition, and Windsurf. Recently, OpenAI introduced its own solution called Codex, while Microsoft rolled out updates to GitHub Copilot.

Although Stitch may seem more limited than some of its competitors, it stands out with its customization features. Designs can be exported to Figma, and developers have the freedom to edit the code as needed. Google Product Manager Kathy Korevec describes Stitch as “an easy place to start coding and designing,” adding that the goal is to make it easier for everyone to bring their ideas to life.


Inside Metacces: Exploring the Future of Hybrid Reality and NFTs | NFT News Today

Inside Metacces: Exploring the Future of Hybrid Reality and NFTs, we examine how this forward-thinking project merges AI, blockchain, and AR/VR for a new level of digital engagement. From personalized AI companions to advanced NFT tokenomics, discover how Metacces shapes tomorrow’s experiences.

Key Takeaways

Metacces combines AI, blockchain, and augmented reality into one immersive platform

Upcoming IDO aims to raise $25 million at a price of $0.25 per ACCES token

Users can develop personalized AI companions (Oli and Ola) for enhanced digital interactions

BLACXES function as signature NFT assets, granting holders rights and potential dividends

The core team includes founders, a blockchain architect, and cybersecurity professionals

What Is Metacces and Hybrid Reality?

Metacces is a bold initiative that merges artificial intelligence, blockchain, and extended-reality technologies. By overlapping digital layers onto physical environments, it fosters a “hybrid reality” where users can view AI-driven virtual companions, tokenized NFTs called BLACXES, and interactive challenges woven into everyday settings.

In this setting, your AI helper, known as Oli or Ola, supports your exploration within Metacces, adapts to your preferences, and continually grows smarter the more you engage. The project’s ultimate goal is to build an engaging space that harmonizes the physical and digital, offering a glimpse into a future where these two spheres unite seamlessly.

The Metacces Experience

The platform operates via a combination of AR, VR, and proprietary blockchain technology. Through a mobile app or a Telegram mini-app, individuals can create AI companions, scan DNA codes to generate digital identities, and gather in-app points for completing various activities. Metacces positions BLACXES as “evolved” NFTs that confer ownership privileges, voting power, and potential financial returns to holders.

How Does It Work?

AI Companions (Oli, Ola): You start by creating a personalized AI assistant that learns and adapts over time.

BLACXES as NFTs: These tokenized digital assets offer ownership and voting rights within the ecosystem.

Access Journey: A tiered progression system with twelve levels, each unlocked through quests, building, and exploration in the hybrid reality environment.

Gamified Quests: Solve riddles, locate hidden items, and participate in airdrops for a chance to acquire BLACXES and ACCES tokens.

Team Behind Metacces

Metacces has assembled a cross-functional team with backgrounds in AI, blockchain, cybersecurity, and software development. The group includes:

Egor Kondratenko (Founder)

Osama Al-jassem (Co-Founder & COO)

Shadi Ayoub (Blockchain Architect, Cloud Engineer)

Sharif Naim (Social Entrepreneur)

Yossef E. (Software Developer)

Each member of the team brings a unique perspective and experience to the project. The founders emphasize network transparency, while the cybersecurity specialists concentrate on safeguarding data. Their CertiK KYC verification and platform security audits reinforce trust in the team’s credibility.

Upcoming IDO and Tokenomics

Metacces has a native utility token named ACCES. The team has completed multiple private sales and is preparing for a public Initial DEX Offering (IDO):

IDO Price: $0.25 per ACCES

Target Raise: $25 million

Total Supply: 500,000,000 ACCES

Allocation Overview

Ecosystem Development: 24% (120 million tokens)

Public Sale: 20% (100 million tokens)

Reserved Funds: 20% (100 million tokens)

Team Allocation: 11% (55 million tokens)

Marketing: 10% (50 million tokens)

Private Sales: 10% (50 million tokens)

Seed Funding: 5% (25 million tokens)
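As a quick consistency check, the allocation percentages and token counts above can be verified against the 500 million total supply. This is a minimal sketch written for this article, not anything published by the project:

```python
# Sanity-check the ACCES token allocation figures quoted above.
TOTAL_SUPPLY = 500_000_000

allocations = {  # name: (percent, token count)
    "Ecosystem Development": (24, 120_000_000),
    "Public Sale":           (20, 100_000_000),
    "Reserved Funds":        (20, 100_000_000),
    "Team Allocation":       (11, 55_000_000),
    "Marketing":             (10, 50_000_000),
    "Private Sales":         (10, 50_000_000),
    "Seed Funding":          (5,  25_000_000),
}

# Percentages should sum to 100, and token counts to the total supply.
assert sum(p for p, _ in allocations.values()) == 100
assert sum(t for _, t in allocations.values()) == TOTAL_SUPPLY

# Each bucket's token count should equal its stated share of the supply.
for name, (pct, tokens) in allocations.items():
    assert tokens == TOTAL_SUPPLY * pct // 100, name

# The 2.15% unlock the article cites works out to 10,750,000 tokens.
assert TOTAL_SUPPLY * 215 // 10_000 == 10_750_000

print("allocation figures are internally consistent")
```

Every bucket checks out, so the percentages and absolute token counts reported here describe the same 500 million-token supply.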

A structured vesting schedule controls the distribution of tokens over time, limiting immediate circulation. The next significant token unlock date is June 1, 2025, with 2.15% of the total supply becoming available.

Real-World Applications

Metacces envisions its hybrid reality accommodating multiple use cases:

Retail and Marketing: Brands can gamify shopping experiences with hidden digital items in physical stores.

Education: Interactive AR-based lessons help students experience immersive, hands-on learning, guided by AI companions.

Tourism: Travelers can explore destinations enriched by digital overlays, collecting BLACXES for achievements.

Social Interaction: Communities can gather in augmented events where VR, AR, and blockchain-based ownership intersect.

Frequently Asked Questions

Q1: What makes BLACXES different from typical NFTs?

BLACXES integrate ownership rights, governance authority, and potential dividends. They act as a core asset within the Metacces ecosystem, beyond simple collectible art.

Q2: How do I join the Access Journey?

You begin by creating an account through Metacces’ mobile app or Telegram mini-app, then follow onboarding steps that include configuring your Oli or Ola AI companion.

Q3: Is the Metacces blockchain public or private?

Metacces uses its own blockchain to ensure decentralization, security, and transparency. This enables user ownership of digital assets and data.

Q4: How do I get ACCES tokens?

You can acquire ACCES tokens in the upcoming IDO, through in-app activities and airdrops, or via future exchange listings.

Q5: What security measures does Metacces have?

The team has completed CertiK KYC verification and security audits, and employs cybersecurity specialists to protect the system from potential threats.

Conclusion

Metacces aims to provide a sophisticated hybrid reality by intertwining AI companions, NFT-based ownership, and blockchain-based economies. Its Access Journey, embedded with quests and gamification, keeps engagement high, while the advanced tokenomics model allows participants to become part of the platform’s growth. As the project prepares for its IDO scheduled for Q3 2025, many will be watching how effectively the team delivers on promises of blending digital and physical worlds in a meaningful way.

For more detailed information, including technical documentation and audit results, readers are encouraged to visit the official Metacces website.


AI Everywhere: Deep Dive into Google I/O 2025 Highlights – Metaverseplanet.net

Today, Google hosted its annual developer conference, Google I/O 2025, unveiling a suite of groundbreaking AI-powered innovations designed to touch every corner of our digital lives. From holographic video calls to virtual shopping try-ons, here’s an in-depth look at each announcement and what it means for users and developers alike.

1. Google Beam: Holographic Video Calls

What it is: An evolution of Project Starline, Google Beam uses advanced light-field display technology to render 3D hologram-like models of call participants.

How it works: Cameras around the user capture depth and motion data. The system then reconstructs a real-time 3D avatar in high resolution, transmitted to the other party’s Beam setup.

Why it matters:

Immersive meetings: Feels like attendees are in the same room, boosting emotional connection in remote collaboration.

Developer opportunities: SDKs to integrate Beam into custom applications—think virtual classrooms, telehealth, and remote design workshops.

Rollout: Limited enterprise pilot this summer, with a wider release slated for early 2026.

2. Imagen 4: Higher-Fidelity Image Generation

Evolution: Building on the success of Imagen 3, Imagen 4 pushes boundaries with:

2K resolution support

Fine-grained control over lighting, texture, and style

Faster inference times for on-the-fly content creation

Use cases:

E-commerce product mockups

Marketing campaigns with bespoke visuals

Game asset prototyping for studios and indie developers

Access: Available via the Google Cloud AI Platform starting Q3 2025, with pay-as-you-go pricing.

3. Veo 3: AI Video Generation with Audio

Capabilities:

Generates realistic video clips up to 30 seconds

Synchronized audio tracks, including ambient sound and dialogue

Scene transitions and camera-angle simulation

Highlights:

Voice cloning feature lets you add custom narration

Music-style transfer applies mood-fitting background scores

Implications:

Content creators can produce polished videos without cameras or studios.

Advertisers can A/B test multiple ad variants instantly.

4. Flow: AI-Powered Video Production

What it does: Combines the strengths of Veo, Imagen, and Gemini into a single interface.

Key features:

Text-to-scene creation: Describe a scene, and Flow generates it end-to-end.

Smart cuts and edits: AI suggests best shot sequences.

Collaborative mode: Teams can edit simultaneously in real time.

Who it’s for: Professional editors, marketing teams, educators—anyone needing rapid video production.

5. AI Mode in Google Search

New “AI” tab: Live within Google Search, powered by the Gemini AI assistant.

Capabilities:

Follow-up questions without rewriting context.

Summarized insights from multiple web pages.

Actionable suggestions (e.g., booking flights, drafting emails).

Availability:

U.S. beta users now; global rollout by end of 2025.

Developer API coming in Q4 for custom search integrations.

6. Google AI Pro and Ultra Subscriptions

Tier breakdown:

AI Pro at $30/month: Priority access to Gemini chat, Imagen 4 credits, early Veo 3 trials.

AI Ultra at $250/month: Unlimited generation, enterprise SLAs, dedicated support.

Why upgrade?

Higher quotas for image/video generation

Faster response times

Exclusive features like Beam enterprise connectors.

7. Project Astra: Vision-Based AI Assistant

Core idea: Let your camera feed be an input channel for AI.

Features:

Object recognition: Identify products, landmarks, plants, etc.

Contextual tasks: “Order me another cup of coffee” after seeing your mug.

Real-world dialogue: Ask about items in view, from “What’s the nutritional info?” to “How old is that building?”

Developer hooks:

AR overlays

Custom actions tied to recognized objects

8. Real-Time Speech Translation in Google Meet

Supported languages (launch): English ↔ Spanish

How it works:

The speaker’s audio is transcribed, translated, then synthesized in the listener’s language—all under 500 ms.

Benefits:

Global teams can meet without language barriers.

Education: Bilingual classrooms become seamless.

Future languages: German, French, Japanese by Q1 2026.
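Google has not published the internals of this pipeline, but the three stages described above can be sketched as a simple chain with a latency budget. Every function below is a hypothetical placeholder standing in for a streaming speech-recognition, translation, or synthesis model, not a real Google API:

```python
import time

# Hypothetical stand-ins for the three stages: speech-to-text,
# machine translation, and voice synthesis. A real system would
# stream audio through ASR/MT/TTS models here.
def transcribe(audio: bytes) -> str:
    return "hello everyone"  # placeholder transcription

def translate(text: str, target: str) -> str:
    return {"es": "hola a todos"}.get(target, text)  # placeholder MT

def synthesize(text: str, voice_profile: str) -> bytes:
    return text.encode()  # placeholder TTS that keeps the speaker's voice

LATENCY_BUDGET_MS = 500  # the end-to-end budget cited above

def translate_speech(audio: bytes, target: str, voice: str) -> bytes:
    start = time.perf_counter()
    out = synthesize(translate(transcribe(audio), target), voice)
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < LATENCY_BUDGET_MS, "over the latency budget"
    return out

print(translate_speech(b"...", target="es", voice="speaker-1"))
```

The point of the chain is that each stage feeds the next with as little buffering as possible; the 500 ms figure is a budget for the whole round trip, not per stage.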

9. Gemini in Chrome: Your AI Co-Pilot Browser

Integration: A new Gemini button in the Chrome toolbar for Pro/Ultra subscribers.

Capabilities:

Automated form filling and data extraction

Contextual insights on any webpage (e.g., stock performance in news articles)

Voice commands to navigate, search, or summarize

Security: Runs in a sandbox to keep browsing data private.

10. Search Live on Mobile: AI Meets Your Camera

What it is: The mobile counterpart to AI Mode, fusing live camera input with Gemini.

Use cases:

Text translation in signage or menus

Product lookup by scanning barcodes

Interactive learning: Point at a plant to get care tips

Screen sharing: Now you can show your mobile display to Gemini for step-by-step assistance.

11. Personalized Smart Replies

An enhanced AI model analyzes your past conversations to craft replies that sound like you.

Features:

Tone matching (formal, casual, enthusiastic)

Suggested follow-up questions

Calendar integration for meeting proposals

12. Virtual Try-On: AI-Driven Fashion Preview

How it works:

Upload a full-body photo

Choose an item in Google Shopping and click “Try On”

AI simulates fabric drape, stretch, and fit on your body

Benefits for shoppers:

Reduces returns due to poor fit

Increases confidence in online purchases

Merchant integration: Via Shopping API, retailers can enable Try-On with minimal setup.

13. Android XR Smart Glasses

Features demoed:

Live memory recall: Glasses remind you where you left your keys.

On-the-fly translation displayed as subtitles in your field of view.

Partner integrations with Samsung, Warby Parker, Gentle Monster for design and optical enhancements.

Developer news:

XR SDK preview available now

ARCore extensions for spatial mapping

What This Means for You

Google’s I/O 2025 announcements mark a decisive shift towards an AI-first world. Whether you’re a developer building the next generation of immersive apps, a business seeking to streamline operations with AI, or an end-user eager for more intuitive experiences, these tools open up new possibilities:

Seamless interactions across devices and formats

Reduced friction in daily tasks—from shopping to translation

Enhanced creativity with video and image generation

Expanded accessibility through real-time translation and personalized assistance

Stay tuned as these features roll out over the coming months. If you’re a developer, explore the respective APIs and SDKs on the Google Cloud and Android developer portals to start integrating AI into your own projects today.


Google Meet Adds AI-Powered Live Voice Translation – Metaverseplanet.net

At the Google I/O event, the company announced that Google Meet is getting a new live translation feature powered by Gemini AI. This capability will instantly translate spoken language into the listener’s preferred language—while preserving the speaker’s voice and tone.

During the highly anticipated I/O presentation, Google shared updates about upcoming innovations, especially in AI technology. One of the key updates was for Google Meet, which is widely used for virtual meetings.

Live Translation in Real-Time with Gemini

Google introduced a real-time speech translation feature for Meet that breaks down language barriers between people speaking different languages. This means you can now communicate seamlessly, even if the other person doesn’t speak your language.

Users will be able to select their spoken language and their preferred language in the Google Meet settings. From there, Gemini will automatically provide instant voice translations during the meeting.

Natural and Human-Like Voice Translation

The AI translation will work just like having a live interpreter, offering fast and fluid translations. What makes it even more impressive is that the AI retains the speaker’s voice, tone, and expressions, making the interaction feel more authentic.

In a demo shown by the company, two participants—one speaking English, the other Spanish—had a conversation that was successfully translated in real-time. The translation feature added voice dubbing for both sides, enabling realistic multilingual communication.

Availability and Languages

Initially, the Gemini-powered translation feature in Google Meet will be available only to paid Gemini subscribers, and it currently supports English and Spanish. Google also stated that German, Portuguese, and Italian will be added in the coming weeks.


Gemini Gets Camera and Screen Sharing on Android and iOS

ChatGPT’s Popular Feature Now Available on Gemini: Use Your Camera to Talk with Gemini in Real Time

The popular camera and screen sharing feature—previously known from ChatGPT—is now available on Gemini for both Android and iOS devices. At Google I/O 2025, the company announced that this feature is no longer exclusive to Pixel phones and is now being rolled out as a free update to all compatible devices.

Thanks to Gemini Live, you can now interact with the AI assistant by sharing your phone’s camera or screen. This allows for more natural and visual conversations. Instead of describing something verbally, you can simply show an object to the camera, enabling Gemini to analyze what it sees, such as identifying an animal species or recommending the right type of screw for a repair task.

✅ This feature is available now for Android and iOS users.

Deeper App Integration and Privacy Controls

Google also revealed that Gemini Live will soon be more deeply integrated with apps like Google Maps, Calendar, Tasks, and Keep. This will allow Gemini to offer more contextual and intelligent answers to your questions based on what it sees and what you’re working on.

When it comes to user privacy, Google has emphasized that users will have full control over which apps are integrated with Gemini. All permissions can be reviewed and changed at any time.

Originally launched only for Pixel devices last month, this powerful feature is now available to a much broader audience. You can start using it right away from your Android or iOS device.


Google Announces Google Beam: Revolutionary 3D Video Communication Technology – Metaverseplanet.net

Google has officially announced that its futuristic Project Starline will now be known as Google Beam—a technology set to revolutionize video calls by making it feel as though you’re in the same room as the person you’re speaking to.

First introduced during Google’s 2021 developer conference, Project Starline was an AI-powered communication system that enabled users to see a 3D hologram-like model of the person they were talking to in real time.

Now, at the latest Google I/O event, the company revealed the next evolution of this concept, officially rebranding the project as Google Beam. According to Google, the technology is steadily becoming the world’s first AI-powered 3D video communication platform.

Feel Presence Like Never Before

With Google Beam, you will experience video conversations that feel as real as being face-to-face. The system uses specialized hardware that supports the technology, allowing both participants to feel as though they’re in the same physical space.

According to CEO Sundar Pichai, the first Beam-compatible devices will launch later this year for selected users. Pricing and detailed specifications have not yet been revealed.

How Does Google Beam Work?

The system uses a six-camera setup to capture a person from multiple angles. Then, an advanced AI video model creates a real-time 3D virtual replica of that person. This results in an experience straight out of science fiction—bringing to life the holographic communication once seen only in movies like Star Wars and Star Trek.

By combining real-time rendering, multi-angle capture, and AI-generated 3D modeling, Google Beam is transforming a once-distant dream into a cutting-edge communication reality.


Inside PlayerID: PSG and Matchain’s New Platform for Athlete Data Ownership | NFT News Today

Paris Saint-Germain has partnered with Matchain to launch PlayerID, an AI and blockchain-powered platform giving athletes full control over their performance data while enhancing collaboration and development through predictive analytics.

Key Takeaways

PSG’s PlayerID is a digital ID platform that gives athletes control over their performance data.

Co-developed with Matchain, it uses AI and blockchain for smarter analysis and injury prevention.

Shifts data ownership from clubs to athletes, promoting autonomy and transparency.

Enables streamlined, secure data sharing across teams and departments.

Born from PSG’s Joint Innovation Studio, it’s built for global expansion and industry transformation.

What Is Paris Saint-Germain PlayerID?

PlayerID is a secure digital platform designed to consolidate an athlete’s entire performance and career data into one intelligent, blockchain-secured profile. Launched by Paris Saint-Germain in collaboration with Matchain, it leverages AI-driven analytics and decentralized data control to give athletes greater autonomy over their performance and career insights.

Historically, athlete data has been fragmented and largely controlled by clubs or leagues. With PlayerID, PSG challenges that paradigm—granting athletes direct ownership, access management, and the power to interact with their fan base through data-backed engagement.

How Does PlayerID Work?

PlayerID operates through Matchain’s proprietary MatchID technology. Here’s how the process unfolds:

Data Consolidation: All relevant athletic metrics, performance stats, and health records are unified under a single digital identity.

Privacy and Permissions: Athletes control who sees what—from coaches to agents—ensuring data remains protected.

Predictive Tools: While PSG and Matchain have not disclosed the exact AI models or platforms used, PlayerID almost certainly leverages a combination of machine learning, predictive analytics, and data visualization tools—practices now standard in elite sports technology. The blockchain layer provides data integrity and access control, while AI delivers actionable insights for performance, health, and engagement.

Fan Engagement: Athletes can share select data points to engage with fans, creating new digital connection opportunities.
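PSG and Matchain have not released PlayerID’s actual schema, but the permission model described above—one unified profile with athlete-controlled visibility per viewer—can be illustrated with a minimal sketch. All field names, viewer names, and values here are hypothetical:

```python
# Illustrative model of athlete-controlled data access: the athlete
# grants each viewer a set of visible fields; everything else stays hidden.
from dataclasses import dataclass, field

@dataclass
class AthleteProfile:
    athlete_id: str
    data: dict = field(default_factory=dict)    # unified metrics and records
    grants: dict = field(default_factory=dict)  # viewer -> set of visible fields

    def grant(self, viewer: str, fields_: set) -> None:
        self.grants[viewer] = fields_

    def view(self, viewer: str) -> dict:
        allowed = self.grants.get(viewer, set())
        return {k: v for k, v in self.data.items() if k in allowed}

profile = AthleteProfile("psg-07", data={
    "sprint_speed_kmh": 34.2,
    "minutes_played": 1820,
    "injury_history": "private",
})
profile.grant("coach", {"sprint_speed_kmh", "minutes_played"})
profile.grant("fan_app", {"sprint_speed_kmh"})

print(profile.view("coach"))    # performance stats, but not medical data
print(profile.view("fan_app"))  # only the field shared with fans
print(profile.view("agent"))    # empty until the athlete grants access
```

In the real platform, the blockchain layer would make these grants auditable and tamper-evident rather than relying on a single in-memory dictionary.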

Real-World Impact and Expansion Plans

PlayerID was first used within PSG’s own performance departments to support athlete monitoring and training. It’s now set for wider adoption, reflecting a shift in how sports organizations handle data—prioritizing secure, transparent systems that support both performance and long-term athlete care.

Key real-world applications could include:

Enhanced Scouting and Recruitment: With standardized, verified player data, talent scouting could become more accurate.

Integrated Teamwork: Clubs benefit from data sharing across medical, coaching, and business units.

Longevity and Well-being: Injury forecasting and load management could help with longer, healthier careers for athletes.

While the potential is clear, getting PlayerID widely adopted won’t be without challenges. Teams and organizations will need to address concerns around privacy, how easily it can be integrated, and how open people are to changing old systems.

The Role of the Joint Innovation Studio

The PlayerID project is a flagship initiative of PSG and Matchain’s Joint Innovation Studio (JIS). This hub brings together sports professionals, tech developers, and external partners to build practical solutions that address real needs in the sports world.

JIS’s core focus areas include:

Decentralized Identity: Empowering secure, self-managed interactions between clubs and fans.

Digital Collectibles: Offering exclusive, blockchain-backed assets like virtual jerseys and iconic moments through NFTs.

Collaborative R&D: Driving innovation with input from PSG’s internal departments and Matchain’s technical expertise.

Par Helgosson, Head of PSG Labs, reinforced this ethos in the announcement:

“With PSG Labs, we’re not just adopting innovation – we’re co-creating it with the best partners in the industry. PlayerID is a concrete example of how data and technology can responsibly support athletic performance and career growth.”

Similar platforms, like Microsoft’s Sports Performance Platform or Catapult’s wearable tracking tech, also aim to enhance athlete data insights. PlayerID adds to this landscape by focusing on long-term data ownership and cross-functional club integration.

Conclusion

Developed by Paris Saint-Germain (PSG) and Matchain, PlayerID is reshaping how athlete data is gathered and shared. Blockchain keeps information secure and reliable.

Meanwhile, its AI features help translate raw stats into real, actionable insights. The result is a system that respects privacy while giving teams and athletes a more structured approach to performance data.




Helix And Avalanche Commit $100M To Support Fusion Framework For Blockchain Economies

In Brief

Helix and Avalanche, with the backing of Faculty Group, introduced Fusion to provide developers, enterprises, and protocols with access to customizable, modular networks designed to deliver tangible real-world outcomes.

Advisory and incubation platform, Helix together with Layer 1 blockchain Avalanche and with the backing of Faculty Group, introduced Fusion—an initiative designed to support the development of purpose-driven, domain-specific blockchain ecosystems. Structured around a novel economic framework aimed at enhancing coordination across the ecosystem, Fusion provides developers, enterprises, and protocols with access to customizable, modular networks intended to generate tangible real-world outcomes.

“Fusion is about unlocking the next chapter of blockchain adoption,” said Fusion Core Contributor David Post in a written statement. “We’re building a framework that goes beyond experimentation—enabling scalable, sector-specific solutions with real-world impact and value. By combining Avalanche’s performance with a powerful suite of modular services, Fusion gives builders the tools they need to deploy meaningful applications and connect them to a thriving, interoperable ecosystem,” he added.

Fusion is designed with a two-layer structure consisting of Composers and Modules. Composers are independent Layer 1 blockchains customized for particular sectors such as AI, decentralized science, and decentralized physical infrastructure networks. Modules function as modular services—including compute resources, stablecoins, and biometric data—that support the Composers.

Each Composer provides accessible software development kits (SDKs) and application programming interfaces (APIs), enabling developers to integrate these services, launch applications, and perform various operations. The Modules serve as interoperable components that can be combined through Composers to generate value for end users. These components include oracles providing real-world data like weather, sports, and commodities; financial tools such as asset swap platforms and treasury management; identity verification; decentralized data storage; and reputation systems that may offer benefits such as loyalty rewards or exclusive access via non-fungible tokens.
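Fusion’s SDKs have not been detailed publicly, so purely as an illustration, the Composer/Module relationship described above might be modeled like this. Class names, method names, and module labels are assumptions for the sketch, not Fusion’s actual API:

```python
# Illustrative model of Fusion's two-layer design: Composers are
# sector-specific Layer 1 chains; Modules are pluggable services
# that Composers combine to serve end users.
class Module:
    def __init__(self, name: str, service: str):
        self.name, self.service = name, service

class Composer:
    def __init__(self, name: str, sector: str):
        self.name, self.sector = name, sector
        self.modules: list[Module] = []

    def attach(self, module: Module) -> "Composer":
        self.modules.append(module)  # compose interoperable services
        return self

    def capabilities(self) -> list[str]:
        return [m.service for m in self.modules]

# A healthcare Composer combining Module types of the kinds named
# in the article (oracles, financial tools, identity verification).
life = (Composer("Life Network", sector="healthcare AI")
        .attach(Module("weather-oracle", "real-world data"))
        .attach(Module("stablecoin", "payments"))
        .attach(Module("identity", "identity verification")))

print(life.capabilities())
# ['real-world data', 'payments', 'identity verification']
```

The design choice the sketch captures is composition over inheritance: a Composer gains capabilities by attaching Modules, which is what makes the Modules reusable across different sector-specific chains.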

Fusion’s architecture is built on Avalanche’s high-performance technology stack, utilizing the C-Chain for fast, EVM-compatible smart contract processing and Interchain Messaging (ICM) to enable secure and efficient communication between Composers and other Layer 1 blockchains. This design supports interoperability and scalability across the network.

“Fusion equips developers with the tools they need to build impactful, real-world applications on live blockchain networks,” said Nicholas Mussallem, CEO of AvaCloud, in a written statement. “While AvaCloud streamlines Layer 1 network creation, Fusion enhances these networks once they’re operational. This initiative combines the best tools for scaling blockchain technology, creating tangible value, and driving widespread adoption across industries,” he added.

Fusion To Accelerate Deployment Of Industry-Specific Composers On Avalanche 

Fusion is backed by $100 million in resources allocated to existing Avalanche initiatives such as Multiverse, Retro9000, InfraBUIDL, and InfraBUIDL AI, aimed at driving a new phase of ecosystem growth. These funds are intended to accelerate the deployment of Composers in practical industry sectors, support key Modules that deliver essential infrastructure and services, and encourage developers and builders to adopt Composer APIs and SDKs for creating effective, results-oriented applications. 

The initial Fusion ecosystem features Composers like Life Network, which assists healthcare organizations in implementing AI-based solutions for specific medical conditions such as stroke prevention. Other Composers include Kite AI, a decentralized AI platform, and Tayga, which focuses on DePIN. Future plans include the introduction of additional Composers addressing areas such as real-world assets (RWAs), identity, and decentralized finance, alongside collaboration with partners like QuickNode and Space and Time to integrate advanced Modules.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Alisa Davidson

Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.

Nyan Heroes Airdrop: How to Claim Your Share of 6.7M NYAN Tokens | NFT News Today

Nyan Heroes Airdrop is attracting attention with a reward pool of 6.7 million NYAN tokens. Players can join Playtest 4 from March 26 to April 13 for a chance to share in nearly $200,000 worth of prizes. This giveaway marks a 34% increase from the previous token distribution and offers 250,000 tokens to the top-ranked participant.

Boost Your Airdrop Rewards

Solana powers this cat-themed hero shooter, where each pilot commands a customizable mech to battle in fast-paced matches. The tier-based reward system, developed by former EVE Online economy director Asimakis Reppas, aims to balance rewards for both casual participants and competitive players. Challenger Tier allocates 3.7 million tokens to those who finish in the top 5,000 with at least 50 matches played.

Leaderboard Tier sets aside 2 million tokens for the best 3,000 contenders, with a minimum requirement of 25 wins. Genesis NFT holders enjoy a separate pool that offers boosted points through rarities.

Collecting Meow Points is key to moving up in the ranks. Completing in-game tasks, connecting accounts, and referring friends all contribute to point totals. Each referral adds 50 Meow Points, and logging enough points can secure a higher tier. The token itself launched in May 2024 at $0.46 and has since dropped to around $0.03, which makes the total airdrop allocation more enticing for active participants.
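The tier and points rules described above can be sketched in a few lines. This is purely an illustrative model of the article's figures (50 points per referral, top 5,000 with 50+ matches for Challenger, top 3,000 with 25+ wins for Leaderboard), not official Nyan Heroes code; all function and field names here are invented for the example.

```python
# Illustrative sketch of the airdrop tier rules described above.
# Pool sizes and thresholds come from the article; names are hypothetical.

def meow_points(task_points: int, referrals: int) -> int:
    """Total Meow Points: in-game task points plus 50 per referral."""
    return task_points + 50 * referrals

def reward_tier(rank: int, matches: int, wins: int) -> str:
    """Assign a reward tier from leaderboard rank and activity."""
    if rank <= 3_000 and wins >= 25:
        return "Leaderboard"   # shares the 2M-token pool
    if rank <= 5_000 and matches >= 50:
        return "Challenger"    # shares the 3.7M-token pool
    return "Unranked"
```

Under these assumptions, a player ranked 4,000th with 60 matches but only 10 wins would land in Challenger rather than Leaderboard.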

What’s New in Playtest 4?

Playtest 4 arrives with new content to keep gameplay fresh. Coco steps into the spotlight as the latest hero, piloting a mech called Guardian Gizmo. Gizmo deploys turrets for defense, creates magnetic platforms that lift teammates, and unleashes a powerful Scrap Goliath ultimate. Fishball mode also joins the action, requiring squads to score by launching a ball into their opponents’ goal. While ball carriers can still shoot, they cannot activate their ultimate, adding strategic depth. Combat unfolds across a new Tokyo map that pushes vertical movement and close-quarters tactics.

This free-to-play experience invites everyone to earn real crypto rewards. Genesis NFT owners benefit from extra multipliers, but any newcomer can jump into matches without making a purchase. Returning to Epic Games Store later this month, Nyan Heroes promises exciting battles and bigger incentives for both seasoned warriors and first-time explorers.

Players seeking an engaging shooter should keep an eye on the Nyan Heroes Airdrop. Staking a claim in that 6.7 million token prize pool could be as simple as joining a match, scoring some points, and putting cat-piloted mechs to the test. Grab your squad, enter Playtest 4, and aim for the top spot on the leaderboard before April 13. The prizes await.

Update

Nyan Heroes officially shut down in May 2025 after failing to secure enough funding to continue development, despite strong community interest and multiple playtests.



Source link

Four Major Web3 Gaming Projects Shut Down in One Week | NFT News Today


Four major Web3 gaming projects shut down within a week, sending the entire industry into a frenzy. This wave of closures is a harsh reminder of the need to strike a balance between blockchain and gameplay.

Key Takeaways

The closures draw attention to unsustainable funding models and unrealistic timelines.

Weak market metrics and thin token liquidity led to major drops in player engagement.

Celebrity-driven hype collapsed without solid tokenomics.

Migration and platform risks exposed vulnerabilities.

Community-driven, skill-based design is vital for future stability.

What Is Web3 Gaming?

Web3 gaming refers to titles that integrate blockchain elements—such as NFTs, tokens, or smart contracts—into traditional gameplay loops. By leveraging decentralized technologies, these games promise ownership of in-game assets and potential rewards. However, recent events underscore that strong gameplay fundamentals must come before any token-driven mechanics.

Why Four Major Web3 Projects Shut Down

Tatsumeeko: Lumina Fates – Ambition Meets Reality

Tatsumeeko: Lumina Fates, developed by Singapore-based Tatsu Works, raised $7.5 million to build a 2.5D RPG with deep social features and housing systems.

Multiple playtests and a move from Immutable X/Solana to Ronin revealed that scope creep overwhelmed their ability to deliver. After the abrupt closure, allegations of a so-called “rug pull” surfaced online—a term often used in the crypto space to describe projects that unexpectedly shut down and leave backers empty-handed. However, Tatsu Works has since pivoted to a new Discord-integrated project called Wander.

Nyan Heroes – The Funding Paradox

Solana-based Nyan Heroes secured $13 million across funding rounds, yet failed to maintain player retention following its fourth playtest. The game attracted over one million total players and tallied 250,000 wishlists across major platforms, but the inability to reach new funding milestones signaled its downfall. With Web3 gaming investments at four-year lows, 9 Lives Interactive ran out of capital to stay afloat.

Blast Royale – Open-Sourcing as an Exit Strategy

Blast Royale by First Light Games chose to open-source its codebase after struggling to stabilize its in-game economy. Despite receiving $5 million in funding, the team’s token generation event failed to prevent financial losses. By releasing the code on June 1, 2025, the developers preserved a degree of community ownership and offered a blueprint for others to build upon.

Rumble Kong League – Celebrity Hype vs. Tokenomics

Rumble Kong League, endorsed by basketball star Steph Curry, collapsed due to failing tokenomics. Its $FAME token launched with minimal liquidity and rapidly lost value. Development stalled, community moderators went unpaid, and the NFT Kongs—once valued at over $1,600—now trade at a fraction of their original price. This stark decline shows how celebrity-driven buzz can’t mask deeper structural issues.

Core Challenges in Web3 Gaming

Overfunding Pitfalls

During the crypto boom of 2021–2024, many projects raised large sums based on promising concepts rather than proven gameplay. Unrealistic valuations forced teams to chase user metrics to satisfy investors, which often led to quick expansions that were difficult to sustain.

Token Liquidity Dilemmas

Tokens like Rumble Kong League’s $FAME or Blast Royale’s $NOOB failed to strike a balance between gameplay utility and stable value. When speculation overshadows skill-based rewards, players tend to abandon ship at the earliest sign of an economic downturn.

Platform Dependency

Several games anchored themselves to a single blockchain. Tatsumeeko’s pivot to Ronin and Nyan Heroes’ focus on Solana left them exposed to declining user bases on those platforms. Cross-chain flexibility is becoming a requirement for resilience.

Preventing Future Failures

To minimize risk and foster healthier growth, developers and investors are adopting new strategies:

Skill-Based Rewards

Emphasizing challenge-driven gameplay over quick monetary gains keeps players engaged for longer periods.

Sustainable Funding Models

Replacing massive seed rounds with phased, performance-linked investments can ensure projects scale at a manageable rate.

Open-Source Initiatives

Projects like Blast Royale show how releasing code to the community can sustain development momentum, even if the original studio steps back.

Cross-Chain Collaboration

Supporting multiple blockchains protects games from single-platform declines and widens player access.

Community-Centric Design

Tatsu Works’ pivot shows that listening to user feedback and genuine engagement is key to establishing a durable Web3 gaming ecosystem.

Frequently Asked Questions (FAQ)

Q1: Are these closures a sign that Web3 gaming is dead?

Not necessarily. Industry experts believe these failures serve as a necessary correction. The sector is refining its focus on strong gameplay and sustainable token models.

Q2: Why is tokenomics important in Web3 gaming?

Tokens directly shape player motivation and game economies. Misaligned incentives cause volatility and retention problems, so balanced systems are essential.

Q3: What can developers learn from these shutdowns?

Developers should prioritize gameplay quality, plan funding structures carefully, and stay agile with cross-chain support. Relying on external hype or short-term profit can undermine long-term success.

Q4: Is open-sourcing game code a viable exit strategy?

It can be. By releasing the source code, studios like First Light Games allow community innovation to continue. This approach aligns with the decentralized ethos of Web3.

Q5: How can players protect themselves from risky Web3 gaming investments?

Users should research team credentials, tokenomics, and community sentiment before committing funds, and stay realistic about risk in an emerging sector.

Conclusion

The sudden shutdown of four well-funded Web3 games in a single week underscores the challenges game developers face when merging blockchain tech with engaging gameplay. Despite large investment rounds and celebrity endorsements, flawed token models, inadequate funding plans, and platform limitations accelerated these failures.

Still, the lessons learned push the industry to embrace user-focused strategies, better funding frameworks, and stronger cross-chain resilience. In this way, recent closures might spark a more mature era for Web3 gaming—one defined by skill-based experiences and authentic player ownership.



Source link
