
Samsung and NVIDIA’s Massive AI Factory with 50,000 GPUs | Metaverse Planet



If you have been following the tech world as closely as I have, you know that the term “AI” is being slapped onto everything lately—from toasters to toothbrushes. It can be exhausting. But every once in a while, a piece of news drops that actually makes me sit up and say, “Okay, now that is a game-changer.”

The recent announcement from Samsung and NVIDIA is exactly that kind of news.

We aren’t talking about a chatbot on your phone. We are talking about the industrial backbone of the world getting a massive nervous-system upgrade. Samsung and NVIDIA are teaming up to build an “AI Factory,” and the specs are frankly terrifying (in a good way). They are deploying 50,000 NVIDIA GPUs to create a manufacturing facility that thinks, predicts, and optimizes itself.

Here is my deep dive into why this matters, not just for chips, but for the future of how we make everything.

Not Just a Factory: A Giant Computer That Makes Things

Let’s be honest: when we think of factories, we usually picture sparks flying, conveyor belts moving, and maybe some robotic arms welding car parts. That is the old world.

What Samsung is building with NVIDIA is less like a traditional factory and more like a massive supercomputer that happens to have a physical output.

The Power of 50,000 GPUs

The headline number here is the 50,000 GPUs. To put that into perspective, most cutting-edge supercomputers used for climate modeling or nuclear simulations run on clusters of this magnitude.

Why does a factory need that kind of firepower?

Real-time Processing: Every sensor, every robot, and every silicon wafer produces terabytes of data.

Physics Simulations: To predict how materials will behave at the nanometer scale, you need to calculate physics in real time.

Training AI Models: The factory learns from its own mistakes.

I find this fascinating because it signals a shift from “automated” (robots following a script) to “autonomous” (robots making decisions based on data).

Enter the “Digital Twin”: The Omniverse Connection


This is the part that excites me the most as a tech enthusiast. The core software powering this beast is NVIDIA’s Omniverse.

If you are not familiar with the concept of a Digital Twin, think of it as the ultimate video game simulation. Samsung is building a virtual replica of their physical factory—down to the millimeter.

Why “Fake” Factories Save Real Money

In the past, if Samsung wanted to change a production line to make a new type of chip, they had to shut things down, retool the machines, and hope it worked. If it failed, they lost millions of dollars in downtime.

With the Omniverse Digital Twin, I see a totally different workflow:

Simulate: Engineers test the new layout in the virtual world first.

Break Things: They push the virtual machines to their limits to see where they fail.

Deploy: Once the simulation is perfect, the software updates the physical robots in the real world.

It is essentially “Ctrl+Z” for manufacturing. You can’t undo a mistake in the real world, but you can undo it in the Digital Twin.
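That simulate, break, deploy loop can be sketched in a few lines of Python. This is purely illustrative: a real digital twin runs physics simulations rather than dice rolls, and the layout names and failure-risk numbers here are hypothetical.

```python
import random

def simulate_layout(layout, trials=1000):
    """Run the virtual production line many times and report its failure
    rate. A stand-in for the digital twin's physics-simulation step."""
    failures = sum(random.random() < layout["failure_risk"] for _ in range(trials))
    return failures / trials

def deploy_if_safe(layout, max_failure_rate=0.01):
    """'Ctrl+Z for manufacturing': only touch the physical line once the
    virtual copy survives stress testing."""
    rate = simulate_layout(layout)
    if rate <= max_failure_rate:
        return f"deploying {layout['name']} (simulated failure rate {rate:.1%})"
    return f"rejected {layout['name']} (simulated failure rate {rate:.1%})"

random.seed(7)  # deterministic for the example
print(deploy_if_safe({"name": "line-A", "failure_risk": 0.002}))  # deploys
print(deploy_if_safe({"name": "line-B", "failure_risk": 0.150}))  # rejected
```

The mistake gets made, and undone, thousands of times in software before anything physical moves.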

My Take: This integration is going to be crucial for Samsung. The semiconductor race against TSMC is brutal. The company that can iterate faster—testing new chip designs in a virtual environment before cutting silicon—is the one that wins the decade.

Beyond Chips: Robots and Predictive Minds

While Samsung is famous for its semiconductors (the brain of your smartphone), this AI Factory concept extends far beyond just printing circuits. The announcement highlighted that this infrastructure will support robotics and mobile device manufacturing as well.

The End of “Unexpected Downtime”

One of the biggest money pits in manufacturing is when a machine breaks unexpectedly. It halts the entire line.

The AI Factory aims to solve this with Predictive Maintenance.

How it works: Instead of fixing a machine after it breaks, the AI analyzes vibrations, heat, and timing data.

The Prediction: The system might say, “Hey, the bearing on Robot Arm #42 is going to fail in 72 hours.”

The Fix: Technicians replace the part during a scheduled break, and production never stops.

It sounds simple, but at the scale of Samsung’s production, this efficiency could save billions. It turns the factory into a biological organism that knows when it’s getting sick before the symptoms even show up.
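A minimal sketch of the prediction step, assuming hourly RMS vibration readings and a simple linear trend; real systems use far richer models, and the sensor values and alarm threshold here are made up:

```python
import statistics

def hours_until_failure(vibration_history, alarm_level, window=24):
    """Extrapolate when a bearing's vibration trend crosses an alarm level.

    vibration_history: one RMS vibration reading (mm/s) per hour.
    alarm_level: vibration level treated as imminent failure.
    Returns estimated hours remaining, or None if there is no upward trend.
    """
    recent = vibration_history[-window:]
    n = len(recent)
    mean_x = (n - 1) / 2
    mean_y = statistics.fmean(recent)
    # Least-squares slope of vibration vs. time over the recent window.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(recent)) \
            / sum((x - mean_x) ** 2 for x in range(n))
    if slope <= 0:
        return None  # flat or falling: nothing to schedule
    return (alarm_level - recent[-1]) / slope

# 48 hours of readings climbing 0.05 mm/s per hour toward a 6.0 mm/s alarm
readings = [2.0 + 0.05 * h for h in range(48)]
print(round(hours_until_failure(readings, alarm_level=6.0)))  # → 33
```

With an estimate like this in hand, the repair can be slotted into the next planned break instead of halting the line.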

Why This Partnership Makes Sense Now

I have been analyzing the trajectory of both companies, and this feels like the inevitable convergence of hardware and software.

For NVIDIA: They need to prove that their GPUs and Omniverse platform aren’t just for hype or gaming. Showing that they can power the world’s largest electronics manufacturer is the ultimate flex.

For Samsung: They are in a fierce battle for yield rates (the percentage of chips that actually work). Using AI to spot defects on a microscopic level—before the chip is even finished—is their best bet to overtake competitors like TSMC and Intel.

The Human Element

You might be wondering, “Does this mean robots are taking all the jobs?” It’s a valid fear. However, my perspective is slightly different. These systems are so complex that they require a new tier of human oversight. We don’t just need operators anymore; we need “factory pilots” who manage these digital twins. The skill set is shifting, not disappearing.

Final Thoughts: The Industrial Metaverse is Here

I often write about the “Metaverse” as a social space, but the Industrial Metaverse is where the real revolution is happening right now.

Samsung and NVIDIA aren’t just building a factory; they are building a blueprint for the future of Earth’s supply chain. When you hold your next smartphone, realize that it wasn’t just assembled; it was simulated, optimized, and birthed by a neural network running on 50,000 GPUs.

That is pretty mind-blowing.

I’d love to know what you think: Do you trust an AI-run factory to produce higher quality devices, or do you think relying this heavily on automation creates a single point of failure? Let’s chat in the comments below!


How NASA’s Roman Telescope Will Map the Dark Universe | Metaverse Planet



I have always been fascinated by the fact that everything we can see, touch, and interact with—from the screen you are reading this on to the furthest stars captured by the James Webb Telescope—makes up only about 5% of the universe. The rest? It’s a ghost story.

We call it Dark Matter and Dark Energy, placeholders for “we have no idea what this is, but it’s holding the galaxy together and ripping the universe apart at the same time.”

For years, I’ve followed the development of NASA’s next great observatory, the Nancy Grace Roman Space Telescope. While Hubble showed us the beauty of the cosmos and Webb showed us its ancient history, Roman is designed to do something different: it is going to map the invisible.

NASA recently released detailed plans on exactly how this machine will hunt down these cosmic ghosts, and after diving into the technical papers, I have to say: the engineering here is nothing short of brilliant.

The Big Question: What is the Universe Made Of?

Before we get into the hardware, let’s set the stage. Modern physics is stuck in a bit of a crisis. We know that Dark Matter acts as the invisible glue holding galaxies together. Without it, stars would fly off into the void. On the other hand, Dark Energy is the mysterious force pushing the universe to expand faster and faster.

But here is where it gets interesting: recent measurements suggest that Dark Energy might be changing over time. If that’s true, our understanding of the universe’s ultimate fate—whether it ends in a “Big Freeze” or a “Big Rip”—could be completely wrong.

This is exactly why the Roman Space Telescope is being built. It isn’t just looking at stars; it’s measuring the structure of reality itself.

The “Wide Eyes” of Roman: A New Way to See

When I compare space telescopes, I like to think about camera lenses.

Hubble is like a telephoto lens: incredible detail, but a very narrow field of view.

Roman is a high-resolution panoramic lens.

NASA’s specs reveal that Roman’s Wide Field Instrument (a massive 300-megapixel camera) will have a field of view 200 times larger than Hubble’s infrared instrument.

To put that into perspective, imagine trying to map the entire ocean by looking through a drinking straw. That’s Hubble. Now, imagine doing it with a wide-angle drone camera. That’s Roman.

The Massive Survey

NASA plans to scan approximately 5,000 square degrees of the sky. That is roughly one-eighth of the entire sky. This isn’t just a snapshot; it’s a census. By gathering such a massive amount of data, Roman will allow astronomers to perform statistical analyses that were previously impossible.

The Secret Weapon: Gravitational Lensing

So, how do you take a picture of something that is invisible? You don’t. You look at how it affects the things you can see. This is where Gravitational Lensing comes in, and it is arguably the coolest trick in an astrophysicist’s playbook.

Based on Einstein’s General Relativity, we know that massive objects warp the fabric of space-time. When light from a distant galaxy passes near a massive object (like a cluster of dark matter), that light bends.

Strong Lensing: Sometimes this bending is dramatic, creating arcs, rings, or multiple images of the same galaxy.

Weak Lensing: More often, the effect is subtle—tiny distortions in the shape of background galaxies that are invisible to the naked eye.

By measuring these distortions, Roman can “weigh” the dark matter causing them.
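The scale of the effect comes from a textbook relation: a point mass M bends light into an "Einstein radius" theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)). The sketch below evaluates it in Python, using a flat-space shortcut D_ls = D_s - D_l (real cosmological distances don't subtract like this) and a hypothetical dark halo:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
MPC = 3.086e22     # one megaparsec in meters

def einstein_radius_arcsec(mass_msun, d_lens_mpc, d_source_mpc):
    """Einstein radius of a point-mass lens, in arcseconds.
    Uses the flat-space shortcut D_ls = D_s - D_l for illustration."""
    d_l = d_lens_mpc * MPC
    d_s = d_source_mpc * MPC
    d_ls = d_s - d_l
    theta = math.sqrt(4 * G * mass_msun * M_SUN / C**2 * d_ls / (d_l * d_s))
    return math.degrees(theta) * 3600  # radians -> arcseconds

# A 10^12 solar-mass halo halfway to a source galaxy 2,000 Mpc away
print(f"{einstein_radius_arcsec(1e12, 1000, 2000):.2f} arcsec")  # → 2.02 arcsec
```

An arcsecond-scale distortion from a trillion solar masses of invisible matter is exactly the kind of signature a wide survey can catch in bulk.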

Introducing “Kinematic Lensing”

This is the part of the recent NASA update that really caught my attention. Roman isn’t just doing standard lensing; it’s pioneering a method called Kinematic Lensing.

Standard lensing has a problem: systematic errors. If a galaxy is naturally shaped like an oval, how do you know if it’s distorted by dark matter or if it was just born that way? These uncertainties can mess up the data.

Kinematic Lensing solves this by combining two streams of data:

Imaging: The visual shape of the galaxy.

Spectroscopy: The movement (kinematics) of the stars inside the galaxy.

By analyzing how stars are rotating within a galaxy, scientists can predict what the galaxy should look like. If the image looks different from the prediction, the difference is likely due to Dark Matter.

Why does this matter?

It reduces measurement uncertainty by at least 10 times.

It eliminates errors caused by the galaxy’s internal alignment.

It provides a much sharper map of how structures in the universe grew over billions of years.
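In code, the core trick is one subtraction. To first order in weak lensing, the observed ellipticity is roughly the intrinsic shape plus the shear (e_obs ≈ e_int + gamma); kinematics supply the intrinsic term that imaging alone cannot. The numbers below are hypothetical:

```python
def shear_estimate(e_observed, e_intrinsic_from_kinematics):
    """Kinematic lensing in one line: once spectroscopy pins down the
    galaxy's intrinsic ellipticity, the leftover distortion is the
    weak-lensing shear (first-order approximation, e_obs ~ e_int + gamma)."""
    return e_observed - e_intrinsic_from_kinematics

# With imaging alone you must average thousands of galaxies, because a
# single galaxy's intrinsic shape (~0.3) swamps the lensing shear (~0.01).
e_obs = 0.31  # measured shape of one background galaxy (hypothetical)
e_int = 0.30  # shape predicted from its rotation curve (hypothetical)
print(round(shear_estimate(e_obs, e_int), 3))  # → 0.01
```

That is where the tenfold reduction in uncertainty comes from: each galaxy carries its own calibration.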

Analyzing the Invisible “Clumps”

One of the most exciting capabilities of Roman is its ability to detect “clumps” of dark matter. The telescope is expected to identify nearly 160,000 gravitational lens systems.

Out of these, scientists estimate about 500 systems will be perfect for analyzing the substructure of dark matter. If dark matter is made of particles (as many theorists hope), it should “clump” in specific ways within galaxies. If Roman sees these clumps, it could finally tell us what particle dark matter is actually made of.

My Take: We have been stuck in a theoretical limbo regarding Dark Matter particles (WIMPs vs. Axions, etc.) for decades. Roman might finally give us the observational evidence to kill off some incorrect theories.

A New Era for Dark Energy

The stakes are even higher for Dark Energy. If Roman’s precision is as good as NASA claims (about 10 times more sensitive than current methods), we will be able to track the expansion history of the universe with unprecedented accuracy.

If Dark Energy is indeed changing—getting stronger or weaker over time—Roman will spot the statistical anomaly. This would be a Nobel Prize-level discovery that rewrites physics books. It would mean Einstein’s “Cosmological Constant” wasn’t a constant after all.

Launch Timeline and What’s Next

The assembly of the telescope was completed back in December, which was a huge milestone. Currently, the team is prepping for the final stretch.

Summer 2026: The telescope moves to the Kennedy Space Center in Florida.

Launch Window: Late 2026 or early 2027.

Vehicle: SpaceX Falcon Heavy.

Operations Start: Approximately 90 days after launch.

I am genuinely excited to see this launch. While Webb is looking at the “beginning,” Roman is looking at the “whole.” It’s the context we have been missing.

When I look at the simulations of what Roman will achieve, I realize we are on the edge of a golden age of cosmology. We are moving from “guessing” what the dark universe looks like to “mapping” it.

I’d love to hear your thoughts on this. Do you think Dark Energy is a constant force, or do you think Roman is going to find something that breaks our current understanding of physics? Let’s discuss it in the comments!


Shrapnel’s Free-to-Play Launch Is Finally Happening | NFT News Today



Shrapnel is heading into Q1 2026 with strong momentum, solid funding, and the goal of showing that a blockchain-based shooter can stand alongside top AAA games on gameplay alone.

Neon Machine developed this free-to-play 4v4 extraction FPS, which will launch on PC through the Epic Games Store. Players can also add it to their Steam wishlist. After years of limited playtests, funding rounds, leadership changes, and ongoing development, the Sacrifice Zone is finally open to all.

This launch is important. The Web3 gaming industry needs a strong example to lead the way, and Shrapnel wants to fill that role.

What Is Shrapnel? A Competitive Extraction FPS First

Shrapnel is a team-based 4v4 first-person shooter set in 2038. Lunar meteorites from asteroid 38 Sigma have devastated parts of Earth, creating a quarantined territory known as the Sacrifice Zone. Inside it lies Sigma, a volatile resource that powers advanced tech and fuels geopolitical conflict.

Players control elite Operators deployed into timed matches built around a focused gameplay loop:

Drop into a live battlefield.

Secure Sigma from meteor impact sites.

Survive dynamic hazards like meteor showers.

Deposit Sigma at Grav-Sync stations.

Trigger squad abilities and control objectives.

Extract or outscore the opposing team.

Shrapnel is different from other extraction shooters because it allows unlimited respawns. This makes the action faster and encourages teams to take more risks, leading to more frequent fights. Sigma acts as both a currency and a source of pressure. If you carry too much, enemies can spot you on their radar. Playing aggressively can give your team more advantages.

Playtest coverage and gameplay videos show that Shrapnel’s gunplay feels closer to Battlefield or Halo than to slower tactical shooters. Players who aim well are rewarded, and movement is quick and responsive. The developers adjusted weapon recoil in late 2025, showing they listened to player feedback.

At its core, Shrapnel is a competitive FPS that uses Web3 technology. It isn’t just a token system with shooting added.

Neon Machine: Leadership, Experience, and Industry Credibility

The team’s credibility is key. Neon Machine is not just another unknown crypto startup. It’s a Seattle studio, founded in 2022 by industry veterans who have worked on major console and PC franchises.

Leadership includes:

Ken Rosman (CEO) – Xbox veteran with experience on Halo Wars and large-scale multiplayer systems.

Don Norbury (Head of Studio) – Credits across Star Wars and Bioshock.

Aaron Nonis (COO) – Emmy winner for Westworld VR with Microsoft and HBO background.

The team has worked on games such as Call of Duty, Counter-Strike: Global Offensive, Ghost of Tsushima, Ghost Recon, Forza, Madden, and Star Wars. This experience matters. Building large multiplayer games takes skill in networking, live-service operations, balancing, anti-cheat systems, and community management.

The studio has faced public challenges too. In 2024 and 2025, there were reports of investor disputes and company restructuring. Leadership changed, a lawsuit was settled, and there were rumors of layoffs. Many other Web3 projects might not have survived these problems.

Instead, Neon Machine secured an additional $19.5 million in August 2025, bringing total known funding near $40 million. Development blogs continued monthly. Weapon systems improved. UI clarity increased. Economy mechanics tightened. That consistency builds trust.

Funding and Financial Backing

Shrapnel’s capital stack reflects serious institutional confidence:

$20M Series A (October 2023) led by Polychain Capital, with participation from Griffin Gaming Partners, Brevan Howard Digital, Franklin Templeton, and IOSG Ventures.

$19.5M raise (August 2025) earmarked for global launch execution and partnerships.

Institutional investors rarely back early prototypes. They prefer live-service products that can grow. This funding gave Neon Machine time to improve gameplay instead of rushing the launch.

If you’re looking at how durable the project is, having deep funding lowers the risk of running out of money soon. This is an important sign of trustworthiness in Web3 gaming.

Extraction Packs: Early Access, Competitive Events, and NFT Utility

During 2024, access to Shrapnel required purchasing an Extraction Pack. These packs funded development and stress-tested infrastructure during Shrapnel Training Exercises (STX).

Pack Tiers and Value

Light – $19.99

Entry cosmetics

Basic Operators

Limited event access

Medium – $49.99

Enhanced Operators

Expanded cosmetic access

Boosted event rewards

Heavy – $99.99

Premium Operators

STX events awarded up to $100,000 in SHRAP tokens. Over 25 days of structured events in 2024 generated 3.7 million sessions and 37 million shots fired.

These numbers show that players are genuinely engaged, not just following hype.

With the Q1 2026 launch, the game becomes free-to-play. Extraction Packs won’t gate access anymore. However, holders are expected to receive post-launch utilities and bonuses tied to Operator NFTs.

This approach rewards early supporters and keeps the game fair for all players.

Blockchain Integration

Shrapnel runs on Avalanche as a dedicated Layer-1 chain and uses the SHRAP utility token. Still, crypto is optional.

Linking a wallet is optional, which makes it easier for new players to join and helps bring more people into the game.

Marketplace and Crafting Systems

The Shrapnel Marketplace supports peer-to-peer trading of:

Weapon skins

Operator NFTs

Callsigns

Schematics and Fragments

Crafting mechanics build on these items. During 2024 tests, 384,000 items were sold through the marketplace. That indicates liquidity, experimentation, and economic engagement.

Shrapnel maintains a “free-to-win” structure. Skill, teamwork, and map awareness determine outcomes. Cosmetic ownership doesn’t grant stat advantages.

Playtest Feedback: Strengths and Weaknesses Addressed

Independent reviewers and community testers praised the fast gunplay and responsive movement, while criticism centered on recoil feel, audio, ability clarity, controller support, and anti-cheat.

Neon Machine responded publicly through dev blogs from September 2025 to February 2026. They recalibrated recoil systems, improved audio loops, refined ability clarity, strengthened controller support, and tightened anti-cheat integration. This ongoing feedback and improvement show the team knows how to run a live-service game.

How to Play Shrapnel at Launch

Step 1: Wishlist

Add the game on Epic Games Store or Steam. Wishlist placement affects visibility and algorithmic ranking, so early community action matters.

Step 2: Download at Launch (Q1 2026)

Install via the Epic launcher. Windows 10/11 supported. Controller support included.

Step 3: Enter the Sacrifice Zone

Queue into 4v4 matchmaking.

Secure Sigma from impact craters.

Avoid meteor strikes.

Deposit at Grav-Syncs to activate abilities.

Control objectives or extract strategically.

Step 4: Engage with the Economy (Optional)

No purchase is required to compete.

What Happens After Launch?

A successful launch is just the start for a live-service shooter. Keeping players, balancing the game, and releasing new content will determine if Shrapnel lasts beyond its first year. The game’s roadmap reflects this. Based on what the developers have shared, how they’re spending funds, and common FPS standards, here’s what to expect after launch.

Immediate Phase: Stabilization, Utility Activation, and Mode Expansion

The first 30 to 60 days will focus on making sure the infrastructure is reliable and activating new features.

Global Matchmaking Activation

During earlier tests, server access remained regionally limited and event-gated. At launch, matchmaking becomes fully open and persistent. That shift does several things:

Expands concurrent player capacity.

Improves matchmaking MMR accuracy over time.

Reduces queue times.

Enables data-driven balance adjustments.

Competitive shooters live or die by match quality, and the early weeks will generate the telemetry that feeds balance patches. Expect quick updates during this period.

Operator NFT Utility Goes Live

Operator NFTs haven’t functioned at full utility yet. Post-launch activation is expected to introduce:

In-game progression benefits tied to ownership.

Cosmetic or reputation signaling systems.

Event-based eligibility bonuses.

Possible reward distributions to early holders.

How this is handled matters. Any NFT benefits should be cosmetic or related to progression, not affect who wins. Neon Machine has said the game is “free-to-win,” so NFT perks will likely stay cosmetic and not give players extra power.

Activating utility also tests the blockchain layer under live user load. That’s a technical milestone as much as a gameplay one.

Teased New Game Mode Rollout

A new mode shortly after launch serves two purposes:

Prevent early player fatigue.

Showcase design flexibility.

Based on existing mechanics, plausible expansions include:

A limited-respawn competitive mode.

A higher-stakes extraction variant.

A rotating event-based Sigma storm mode.

Objective-heavy territory control.

Live-service shooters often introduce secondary modes to test engagement elasticity. If retention spikes after a new mode release, it validates expansion pacing.

Near-Term Phase

Months two through six define whether Shrapnel transitions from launch curiosity to ecosystem anchor.

Additional Maps

Extraction shooters depend on map knowledge depth. Early playtests showed strong layout fundamentals but limited environmental diversity.

Good map design keeps players interested. A new map can change which weapons and team setups work best. If Neon Machine releases maps at the right pace, they can keep the game fresh without upsetting the balance.

Ability Expansions

Operator abilities create strategic differentiation. Early builds leaned heavily on core extraction mechanics. Post-launch expansions could introduce:

Area denial mechanics.

Temporary vision disruption tools.

Defensive fortification abilities.

Mobility-based Sigma theft strategies.

New abilities must integrate cleanly with existing Sigma risk mechanics. Overpowered ability cycles can destabilize competitive ecosystems. Expect controlled rollout with rapid balance patches if needed.

Creator Tools for User-Generated Cosmetics

User-generated content represents a major long-term differentiator. If Creator Tools launch successfully, players could design and trade their own cosmetics.

This system helps players feel invested in the game, not just by playing but by creating. The best live-service games let players express themselves, and cosmetic items often stay popular even after the initial excitement fades.

Moderation and quality control will be critical. Open cosmetic pipelines require strong curation to maintain visual coherence.

SHRAP Staking Mechanics

Token staking introduces economic participation. However, it must avoid overshadowing gameplay.

Staking may involve:

Governance voting rights.

Event participation tiers.

Cosmetic reward unlocks.

Community treasury influence.

The game’s economy should not affect who wins or loses. If staking gives players an advantage in gameplay, it could cause problems and upset the community.

Execution discipline here reinforces credibility.

Long-Term Phase: Sustainability and Expansion

The first year will show if the game can last. The second year will shape its legacy.

Seasonal Content Cadence

Seasonal frameworks create structured engagement cycles. Structured seasons help manage changes in the game’s meta, and they provide regular content updates that keep players coming back.

Being consistent is more important than flashy updates. If the team misses seasonal content releases, players may lose trust.

China-Compliant Deployment

A localized, regulation-compliant version expands market reach significantly. China represents one of the largest FPS audiences globally.

Deployment requires localization and regulatory compliance. If done well, expanding into China will bring in new revenue and make the game less dependent on Western players.

Esports Expansion (Conditional on Adoption)

Competitive viability depends on:

Spectator clarity.

Balance stability.

Server reliability.

Community scale.

Extraction shooters are usually hard to follow when streamed or broadcast. Shrapnel’s 4v4 format is easier to watch than larger battle royale games.

If player counts justify it, Neon Machine could support:

Structured tournament circuits.

SHRAP-funded prize pools.

Third-party esports integrations.

Creator-led competitive leagues.

However, esports plans should come after the game has a strong competitive scene. Starting leagues too early, before enough players are interested, usually doesn’t work out.

Strategic Outlook

Shrapnel’s roadmap reflects a measured live-service strategy:

Stabilize infrastructure.

Activate NFT utility without compromising fairness.

Expand content deliberately.

Grow economy cautiously.

Scale globally.

Support competition if demand materializes.

How well the team executes will shape the game’s future. Strong funding gives them time, and experienced leaders understand multiplayer games. In the end, the community will decide if Shrapnel becomes a lasting FPS series or just a big experiment.

The launch is only the beginning. Keeping players engaged will shape the rest of the story.

Why Shrapnel’s Launch Is a Defining Moment for Web3 Gaming

Many blockchain games prioritized token mechanics over gameplay quality. Shrapnel reverses that order. It places competitive FPS fundamentals first and integrates digital ownership as an optional layer. The studio has strong funding, veteran leadership, measurable playtest engagement, and has addressed community criticism openly.

If Shrapnel sustains stable servers, fair matchmaking, and meaningful content cadence after Q1 2026, it won’t be judged as a “crypto game.” It’ll be judged as a shooter.

This difference will decide if the launch is a major milestone or just another missed chance.

The Sacrifice Zone will open again soon, and now everyone can join in.




What Mark Zuckerberg’s metaverse U-turn means for the future of virtual reality – Hypergrid Business



(Image by Maria Korolov via Adobe Firefly.)

Mark Zuckerberg’s vision for the metaverse was meant to reimagine how we interact with each other and the world, providing us with an immersive world where we could seamlessly combine digital and physical information.

The parent company, renamed Meta along the way, had begun introducing headsets and reimagining everyday computing with its Project Orion augmented-reality glasses.

Now, however, Meta is making deep budget cuts to its Reality Labs division, which could see around 10% of the 15,000 employees working on the metaverse and related projects lose their jobs. Meta’s chief technology officer Andrew Bosworth confirmed the cuts to staff in a memo on January 13.

But my years of research with colleagues suggest this apparent U-turn is far from the end of the road for the technology. It is, though, likely to signal a shift from virtual reality (VR) to less immersive ways of merging the digital and real worlds, in search of commercial applications that stretch beyond gaming.

This augmented reality approach has already been realised through products such as Microsoft HoloLens, which present virtual information within an optical see-through display.

Such augmented reality devices give the illusion of virtual information appearing in physical 3D space. They can also allow users to interact through both gestures and gaze, using integrated hand- and eye-tracking technology.

Meta’s CEO Mark Zuckerberg explains the metaverse in 2021. Video: Skarredghost.

The problem with virtual reality

After decades of research and development, VR technology is unquestionably a real product serving real needs. State-of-the-art headsets provide users with impressive immersive 3D environments along with integrated robust hand and eye tracking. Beyond gaming, virtual reality is used to train medical doctors, engineers, pilots, and many others.

But there is a conflict when it comes to more general, day-to-day applications. I and many others believe that with the advent of AI, new interfaces beyond the mobile phone will be needed to control and benefit from these applications at work and at home. At the same time, it is clear from our research that many people find VR headsets just too immersive, unsettling and impractical to use.

In a two-week user study in 2022, we compared working in virtual reality for an entire working week – five days in a row, eight hours each day – against a baseline of performing the same work using a standard setup with a regular display, external keyboard and mouse.

In this study, we asked 16 volunteers to do their ordinary office work, such as word processing, programming, creating spreadsheets, and so on. The headline result was that users could work in virtual reality for an entire workweek – but there were lots of issues in doing so.

Study participants using VR experienced a higher perceived workload as well as lower usability, lower perceived productivity, higher frustration, lower wellbeing, higher anxiety, a greater experience of simulator sickness and higher visual fatigue. In short, VR yielded worse outcomes on all key metrics.

Despite these results, in the interviews participants commented that they could see themselves using VR if headsets were lighter and if exposure to virtual reality was limited to a few hours at most.

In a follow-up research paper in 2024, we examined the video evidence we had collected in the study in detail. It showed what participants did while wearing the headset – adjusting it, managing the cable when it got in the way, eating and drinking by lifting up the headset halfway, receiving phone calls, and rubbing their faces.

Our analysis showed people did gradually get used to the VR headsets. Overall, participants adjusted their headset about 40% less frequently towards the end of the workweek, and removed the headset approximately 30% less frequently.

This tells us it is possible to work in virtual reality as we normally work with a physical desktop, keyboard and mouse. But if we arrange it so the VR setup replicates our ordinary setup then VR, unsurprisingly, performs worse. We are asking a virtual environment to perfectly replicate our physical work environment, which is impossible.

More importantly, it tells us something about trade-offs. Virtual reality provides a fully immersive virtual environment that transports users to completely different virtual worlds. But this has to be balanced against negative qualities such as poor ergonomics, nausea and fatigue.

Superhuman powers

For any form of extended reality – from augmented-reality smart glasses to something much more ambitious – to achieve mainstream success, it needs to provide more positive than negative qualities in relation to devices we are already familiar with, such as laptops, tablets and phones.

The solution, I believe, is to be bold and reimagine extended reality – not as a transplantation or extension of devices we already use in our daily lives, but as a medium that bestows us superhuman powers. In particular, it can enable us to seamlessly interact with computing systems in the 3D space around us.

In physical reality, you have to select a tool to use it: you pick up a spraycan, then push a button to spray-paint. In a desktop interface, you click the spraycan icon and can thereafter spray-paint using a mouse. But in extended reality, there is no need to first select the tool in order to use it – you can just do it with hand gestures.

Extended reality ‘hot gestures’ can be used to control digital tools. Video: University of Cambridge.

Simply by forming your hand as if you were holding a spraycan and pushing down your index finger to spray, the system can automatically recognise that you wish to use the spraycan tool. It will then allow you to spray-paint the digital items, modulated by your index finger pushing down a virtual spraycan button.
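The hot-gesture idea described above can be sketched in code. This is an illustrative toy, not the Cambridge system's actual implementation: a hand tracker is assumed to supply per-finger curl values, the pose classifier recognises a "spraycan" grip, and the index finger's press depth modulates spray strength. All names and thresholds here are hypothetical.

```python
# Hedged sketch of hot-gesture recognition (illustrative only).
# A hand tracker is assumed to report curl values in [0, 1] per finger:
# 0 = fully extended, 1 = fully curled.
from dataclasses import dataclass

@dataclass
class HandPose:
    thumb: float
    index: float
    middle: float
    ring: float
    pinky: float

def is_spraycan_pose(pose: HandPose) -> bool:
    """A spraycan grip: middle/ring/pinky curled around the 'can',
    while the index finger stays comparatively free to press the 'button'."""
    gripping = all(c > 0.6 for c in (pose.middle, pose.ring, pose.pinky))
    index_free = pose.index < 0.5
    return gripping and index_free

def spray_strength(pose: HandPose) -> float:
    """Map index-finger press depth to a 0..1 spray intensity;
    zero unless the spraycan pose is currently held."""
    if not is_spraycan_pose(pose):
        return 0.0
    return min(pose.index / 0.5, 1.0)

# A relaxed open hand does not trigger the tool; a grip with the index
# finger half-pressed sprays at half strength.
open_hand = HandPose(0.1, 0.1, 0.1, 0.1, 0.1)
grip = HandPose(0.3, 0.25, 0.8, 0.85, 0.9)
print(is_spraycan_pose(open_hand), spray_strength(grip))  # prints: False 0.5
```

The key design point is that the pose itself selects the tool, so no prior "click the spraycan icon" step is needed.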

Extended reality can also provide a medium for interacting with personal robotics by, for example, showing the robot’s future movements in 3D space in front of us. As artificial intelligence becomes increasingly embedded in our physical reality, this will become more important.

Ultimately, any vision of a metaverse (not just Zuckerberg’s version) will only succeed if it goes beyond current user interfaces. Extended reality must embrace the fact that it allows a seamless blending of virtual and physical information within our 3D world.

This article is republished from The Conversation under a Creative Commons license. Read the original article here.




Playnance Expands ‘Be The Boss’ Program, Enabling $1 Web3 Gaming Platform Launches | NFT News Today



Playnance has expanded its Be The Boss partner program through PlayW3, giving users the ability to launch branded Web3 social casino platforms for $1. The program provides instant activation, automated daily on-chain payouts, and full backend infrastructure without requiring technical expertise.

Key Takeaways

Users can launch a branded Web3 social casino for $1

Platforms operate on a 50/50 revenue share model

Daily automated on-chain payouts go directly to partners’ wallets

1,500+ partners have joined, with $1.9 million paid out

A $250 million partner pool has been allocated

1,500+ Platforms Already Live Worldwide

The Be The Boss program is currently operating across global markets. According to Playnance, more than 1,500 partners are actively running platforms, and over $1.9 million has been paid out so far.

Each participant, referred to as a “Boss,” receives a fully operational social casino platform hosted under a unique subdomain. Activation takes only minutes, allowing partners to begin operating immediately. There is no technical setup required and no onboarding process to complete. The company describes the $1 entry fee as ‘symbolic’.

Playnance describes the structure as an alternative to traditional affiliate or referral models. Rather than earning commissions from traffic alone, partners operate a complete digital platform supported by the company’s proprietary blockchain infrastructure.

The program runs on a 50/50 revenue share model, which Playnance says is among the highest in the industry. Earnings are distributed through automated daily on-chain payments sent directly to partners’ wallets. In addition, a $250 million partner pool has been allocated to support long-term participation as the network continues to expand and onboard new communities.
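The mechanics of the revenue share can be illustrated with simple arithmetic. The figures below are hypothetical, not Playnance data; the point is only how a 50/50 split with daily payouts divides a platform's gross revenue between the partner ("Boss") and the operator.

```python
# Illustrative arithmetic only; amounts are hypothetical, not Playnance figures.
def daily_split(gross_revenue: float, partner_share: float = 0.5):
    """Divide one day's gross revenue between partner and operator."""
    partner = round(gross_revenue * partner_share, 2)
    operator = round(gross_revenue - partner, 2)
    return partner, operator

# A hypothetical day with $340.50 in platform revenue:
print(daily_split(340.50))  # prints: (170.25, 170.25)
```

In the described program, the partner's half would then be sent on-chain to the partner's wallet each day rather than accrued and paid out periodically.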

Founded in 2020, Playnance develops non-custodial, on-chain platforms designed to combine familiar Web2-style user experiences with blockchain-based infrastructure. The company focuses on reducing friction between traditional user behavior and blockchain execution.

10,000+ On-Chain Games and G Coin Integration

Each platform includes access to more than 10,000 on-chain social casino games. Other offerings include social prediction markets, sports-based social events, crash-style games, interactive financial markets, cash tournaments, jackpots, and built-in bonus and retention systems.

Blockchain settlement, payout processing, player support, and technical operations are handled through PlayW3. This allows partners to focus on community growth and user engagement while backend systems remain centrally managed.

At the center of the ecosystem is G Coin, the utility token used for gameplay activity, rewards, and daily earnings distribution. As more Boss platforms go live and onboard users, activity across PlayW3 increases, expanding token usage across the broader network.

According to Pini Peter, CEO of Playnance, the goal is to make platform ownership “accessible and practical,” adding that the model is already live and “operating at scale, and driven by engagement rather than hype.”




Bitget Wallet Appoints BCG Veteran Will Wu As Head Of Asia-Pacific



In Brief

Bitget Wallet has appointed Will Wu as Head of Asia-Pacific to lead regional expansion and localized execution as the company shifts toward a broader everyday finance platform.

Bitget Wallet Appoints BCG Veteran Will Wu As Head of Asia-Pacific

Bitget Wallet, an everyday finance application, has appointed Will Wu as Head of Asia-Pacific. In the role, he will be responsible for advancing regional growth, strengthening partnerships, and overseeing localized go-to-market execution as the company expands its presence in one of the most active global regions for digital wallets and onchain financial activity, particularly across rapidly developing emerging markets.

The appointment reflects Bitget Wallet’s broader strategy to scale financial services that align with local usage patterns as cryptocurrency adoption moves beyond trading and into routine financial behavior.

This leadership change coincides with Bitget Wallet’s accelerated transition from a trading‑oriented crypto wallet to a more comprehensive everyday finance platform that incorporates stablecoin‑based payments, earning features, and real‑world spending options. Within the Asia-Pacific region, the wallet has expanded its support for card‑based crypto payments and QR‑enabled transactions, mirroring the region’s mobile‑first and wallet‑centric financial landscape.

Industry data from Worldpay indicates that mobile wallets have surpassed cards as the dominant online payment method across most markets in the region, highlighting the growing importance of localized payment interfaces as QR‑based transactions increasingly replace traditional card rails in both online and in‑store environments.

Asia-Pacific Emerges As The Global Epicenter Of Digital Wallet And Onchain Finance Activity

Asia-Pacific remains the global leader in digital wallet usage, accounting for nearly two‑thirds of worldwide digital wallet spending according to Deloitte. The region is also the most active market for onchain finance, with Chainalysis data showing that Asia-Pacific consistently processed more than $185 billion in monthly onchain value through mid‑2025, often exceeding North America, while estimates suggest it represented roughly one‑third of global cryptocurrency revenue in 2025.

“Asia-Pacific is shaping how crypto is used in practice, not just how it’s traded,” Wu said. “Digital wallets are already embedded in everyday behavior across the region, especially in emerging markets, creating a natural pathway for onchain finance to move into payments, savings, and daily money management. The focus now is on building infrastructure that aligns with local habits while remaining connected to the global onchain economy,” he added.

In his new role, Wu will oversee Bitget Wallet’s Asia-Pacific strategy, guiding regional expansion, partnership development, and localized market execution. He brings more than a decade of experience in strategy and international growth, having previously led global expansion initiatives at major cryptocurrency exchanges and advised companies on market entry and scaling strategies during his tenure at Boston Consulting Group.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author

Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.





China Clears Offshore RWA Tokenization, Boosting Investor Interest



Key Highlights

China clears offshore RWA tokenization, giving firms a legal path to issue tokens backed by domestic assets.

Crypto remains tightly controlled; Chinese firms can’t issue stablecoins abroad, and Bitcoin or Ethereum lack legal tender status.

Hong Kong could become a hub for China’s digital asset experiments, including potential gold-backed tokens challenging the U.S. dollar.

China is opening a new chapter for real-world asset (RWA) tokenization, causing a stir among global investors. At the Consensus Hong Kong event, Joseph Chee, Executive Chairman of Solana, stated, “You can use onshore assets, tokenize them, and do your RWA crypto business outside of China. All you have to do is make a filing.” 

The chairman’s remarks come as investors in Hong Kong and China rushed to buy stocks linked to real-world assets. They are betting that Beijing will soon formalize rules for offshore RWA tokenization. 

While China still blocks these activities domestically, authorities now allow companies to issue tokens overseas using Chinese assets as backing. This opens a clear and legal path for businesses and investment banks to grow in blockchain-based asset management.

Implications for RWA market and technology providers

As per Reuters, Guosen Securities described the guidance as a milestone for China’s RWA sector, noting it turns “unregulated growth” into “a race for compliance.” Firms with blockchain expertise and cross-border operations can now target new business opportunities. Moreover, non-compliant projects are likely to be phased out, creating space for technology companies offering compliant data management and tokenization services.

Apart from the focus on RWA tokenization, the announcement highlights China’s careful approach to cryptocurrencies. The central bank and seven other regulators recently made it clear that digital coins like Bitcoin and Ethereum cannot be used as official money in the country. 

They also said Chinese firms cannot issue yuan-backed stablecoins overseas without approval, stressing strict oversight. Authorities listed several forbidden activities, including trading crypto, swapping one digital asset for another, and providing pricing and information services. They warned that breaking these rules could be considered illegal financial activity.

Global context and strategic considerations

U.S. Treasury Secretary Scott Bessent highlighted potential strategic implications, suggesting China might explore gold-backed digital assets. He said, “They have a very large sandbox in Hong Kong, and the [Hong Kong Monetary Authority] is actively traveling the world, looking at different mechanisms.” 

While speculative, this shows Beijing may diversify beyond the yuan for digital asset support, potentially challenging the U.S. dollar’s global dominance.

China’s new rules for RWA give companies legal clarity and a chance to operate offshore, while crypto activities stay tightly controlled. Firms with blockchain and compliance know-how are likely to benefit the most.

Also Read: Trump Coins Sucked the Liquidity Out: Scaramucci Blames Political Meme Coins

Disclaimer: The information researched and reported by The Crypto Times is for informational purposes only and is not a substitute for professional financial advice. Investing in crypto assets involves significant risk due to market volatility. Always Do Your Own Research (DYOR) and consult with a qualified Financial Advisor before making any investment decisions.




The Future of Surgical Visualization Is Spatial



Surgery has always evolved alongside technology. From imaging breakthroughs to robotic assistance, each wave of innovation has expanded what is possible inside the operating room.

Yet one limitation has remained constant. Surgeons still interpret deeply spatial anatomy through flat screens.

SurgiSpace was created to challenge that paradigm.

SurgiSpace is an immersive spatial anatomy visualization experience that reimagines how surgeons and medical professionals explore, understand, and prepare for complex procedures. Instead of viewing anatomy, users step into it. Instead of rotating models on a monitor, they interact with life-sized structures in their physical environment.

This is not simply a 3D model in a headset. It is an early expression of what spatial and intelligent surgical environments can become.

The shift from interpretation to immersion

Modern surgical planning depends heavily on CT and MRI scans reconstructed into 3D models and reviewed on screens. While powerful, these systems still require the brain to translate visual cues into depth, proximity, and spatial relationships.

That translation takes effort.

As procedures grow more complex and minimally invasive techniques demand greater precision, the cognitive load placed on surgeons continues to increase. The need is not just for more data. It is for better spatial understanding.

The next step forward is not more imaging. It is immersive comprehension supported by intelligent systems that adapt to clinical context.

Rebuilding anatomy in space

SurgiSpace brings torso anatomy into a full-scale, interactive spatial environment. Using Unity and spatial computing frameworks, the experience allows surgeons to explore anatomy at life size within their real-world surroundings.

The anatomical model floats naturally in space. Users can walk around it, scale it, rotate it, and examine relationships from any angle. Layers can be isolated to study skeletal structures, organs, and vascular systems independently. A dynamic slicing tool reveals internal cross-sections in real time.
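The dynamic slicing described above is conceptually a clipping plane: geometry on one side of the plane is hidden so the interior becomes visible. The sketch below is an assumption-laden toy, not SurgiSpace's implementation; it shows the core test on a handful of vertices rather than a full mesh.

```python
# Minimal clipping-plane sketch (illustrative, not SurgiSpace code).
# A slicing plane is defined by a point on the plane and a normal;
# only vertices on the side the normal faces are kept.
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

def dot(a: Vec3, b: Vec3) -> float:
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

@dataclass
class ClipPlane:
    point: Vec3   # any point on the plane
    normal: Vec3  # the visible side faces along this normal

    def keeps(self, p: Vec3) -> bool:
        rel = (p[0] - self.point[0], p[1] - self.point[1], p[2] - self.point[2])
        return dot(rel, self.normal) >= 0.0

def slice_vertices(vertices: list[Vec3], plane: ClipPlane) -> list[Vec3]:
    """Return only the vertices on the visible side of the slicing plane."""
    return [v for v in vertices if plane.keeps(v)]

# Slice a tiny model at x = 0, keeping the +x half:
model = [(-1.0, 0.0, 0.0), (0.5, 0.2, 0.0), (2.0, -1.0, 1.0)]
plane = ClipPlane(point=(0.0, 0.0, 0.0), normal=(1.0, 0.0, 0.0))
print(slice_vertices(model, plane))  # keeps the two vertices with x >= 0
```

In a real engine the same plane test is usually done per-fragment on the GPU, and moving the plane interactively is what makes the cross-section feel "dynamic".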

Spatial torso model

Planning Mode transforms exploration into structured intent. Non-relevant structures subtly fade, target zones highlight, and guided overlays support systematic review. Annotation pins and measurement tools enable contextual insight without breaking immersion.

What makes this direction powerful is not just visualization. It is the foundation for intelligence. Spatial environments like SurgiSpace can be enhanced with AI-driven capabilities such as automated anatomical segmentation, contextual highlighting of risk zones, predictive guidance overlays, and adaptive training scenarios based on user interaction patterns.

The experience moves from static viewing to responsive assistance.

Enabling spatial medical computing – Watch the demo

Beyond visualization: Toward intelligent surgical environments

In medical training, immersive spatial visualization can reshape how anatomy is taught. AI-enhanced modules can dynamically adjust difficulty, introduce scenario-based complexity, and analyze trainee interaction patterns to improve learning outcomes.

In surgical simulation, intelligent spatial systems can evolve beyond static models to incorporate patient-specific data, automated structural identification, and real-time contextual prompts.

In pre-operative planning, future integrations may allow spatial environments to respond to imaging data, highlight anatomical constraints, or simulate procedural pathways.

This is where immersive technology becomes not just spatial, but cognitive.

The broader vision

Spatial computing represents a foundational shift in how humans interact with digital information. Healthcare, particularly surgical planning and training, stands to benefit profoundly from this shift.

At TILTLABS, we believe immersive systems must go beyond visual novelty. They must enhance understanding, reduce cognitive friction, and integrate intelligence into real clinical workflows.

SurgiSpace reflects this philosophy. It combines immersive design, spatial interaction, and a forward-looking AI-ready architecture to demonstrate how next-generation visualization tools can support surgical insight.

As immersive technologies converge with artificial intelligence, real-time imaging, and digital twin frameworks, surgical environments will become more adaptive, more contextual, and more intelligent. The future of surgical planning will not be confined to screens. It will exist in space and evolve with the clinician.

Experience the future of surgical visualizations

SurgiSpace offers a forward-looking perspective on how immersive and AI-enabled spatial technologies can augment surgical training and planning.

If you are exploring spatial computing in healthcare, next-generation visualization systems, or intelligent clinical training platforms, we invite you to experience SurgiSpace firsthand.

Request a live demonstration and discover how anatomy transforms when it moves from the screen into space.

The post The Future of Surgical Visualization Is Spatial appeared first on TILTLABS.




3D Assets



The metaverse can be understood as a persistent, interconnected digital ecosystem where people interact with content and each other through three-dimensional experiences. Unlike traditional digital media, which is dominated by flat screens, 2D interfaces, and pixel-based imagery, the metaverse envisions environments that behave more like the physical world, where interaction is not confined to 2D windows, menus, and pages.

The 3D Assets Working Group focuses on facilitating the industry-wide transition from today’s predominantly 2D digital content to a future where 3D becomes the default medium for users. This migration introduces a set of challenges involving diverse rendering engines and formats, heterogeneous devices and hardware, spatial interaction models, and new considerations for performance, visual fidelity, and interoperability.

The working group’s mission is to identify the challenges affecting the transition and reduce friction by facilitating communication between different standards-developing organizations and aligning requirements for an interoperable Metaverse. By collaborating across the ecosystem, the group aims to lower barriers to adoption, streamline workflows, and support the broad vision that 3D will become the foundation of next-generation digital media.

The 3D Assets Working Group tackles interoperability challenges on the following fronts:

Bridging communities and technologies—including USD, glTF, volumetric media standards, and avatar ecosystems—to enable smooth exchange of assets and content across different tools, applications, platforms, and industries.
Identifying common use cases and requirements across sectors such as gaming, entertainment, education, social media, health, manufacturing, fashion, and more, ensuring that future standards address real-world needs.
Harmonizing fragmented efforts across multiple standardization bodies and industry initiatives by studying overlaps and facilitating inter-SDO communication.

 




ATEK Grid is back – Hypergrid Business



Atek Grid. (Image courtesy Atek Grid.)

We are pleased to formally announce that ATEK Grid is officially back online at atekgrid.com.

Over the past several weeks, we have quietly brought the platform back up and stabilized core systems. With that groundwork complete, our focus now turns to rebuilding ATEK Grid as a technology platform designed to support the OpenSim community and the broader metaverse ecosystem through practical tools, infrastructure, and collaboration.

Our mission moving forward is clear. We intend to contribute where it matters most. This means developing and sharing applications, infrastructure tools, and modern technology that OpenSim grids and developers can actually use. We are not trying to reinvent the wheel or compete with existing communities. Our goal is to strengthen what already exists and help move the ecosystem forward in a sustainable and cooperative way.

As part of this effort, we are actively building and advancing a growing set of applications intended to support creators, operators, and users across Second Life and OpenSim-based worlds.

Game Tokens
A reward-based system built around participation, contribution, and fair value exchange. It is inspired by the fairshare.social concept and emphasizes incentives that benefit communities rather than extractive or purely transactional models.


Real Match
A metaverse-native dating and social connection application designed specifically for OpenSim and Second Life users with a focus on identity, consent, and meaningful interaction across platforms.

Metaverse Jobs
An application focused on helping users find work opportunities across Second Life and OpenSim grids. It connects creators, venue owners, grid operators, and service providers through a centralized and transparent system.

VMS Venue Management Services
Venue operations technology designed for clubs and event spaces, including scheduling, automation, reporting, in-world devices, and HUD-based tools that support DJs, hosts, managers, and venue owners.


Music Factory
A talent agency platform supporting DJs, singers, and hosts. It provides structured representation, scheduling coordination, and operational tools for performers working across multiple venues and grids.

All of these projects are being developed as modular and ecosystem-friendly tools. They are designed to integrate with existing grids and communities rather than replace them. Operators and developers can adopt only what is useful to them.

In addition to platform development, ATEK Grid is also interested in funding and supporting projects focused on AI-driven systems and NPC development. We see strong potential in intelligent agents, automation, and immersive non-player characters that enhance virtual world experiences in meaningful and practical ways. Our goal is to foster development that aligns with real community needs rather than speculative or novelty-driven concepts.

ATEK Grid should be viewed as a technology platform first. It is a place where ideas are built, tested, refined, and shared with the wider metaverse. We believe the future of OpenSim depends on solid infrastructure, open collaboration, and thoughtful innovation.

We are pleased to share that several members of our original team have returned as part of this relaunch. In addition, Josh Boam, CEO of Hosting4Opensim, is now running our servers and infrastructure. This provides a strong and reliable technical foundation as we move into the next phase.

As always, we remain open to collaboration. We welcome opportunities for joint development, technical contributions, media coverage, and partnerships wherever they bring value to the broader metaverse community.

It was important to communicate this directly and transparently. We look forward to building again with a renewed focus on contribution, stability, and long-term value for OpenSim and the communities that depend on it.

Frank Corsi is the owner of the Atek Grid and has headed up numerous other virtual world projects.



