Overview Energy, a startup aiming to transmit energy from space with lasers, has taken the stage with an ambitious project: placing solar panels in orbit to transform existing solar power plants on Earth into facilities that generate electricity at night as well.
The company’s approach is based on collecting continuous sunlight in space with large solar panels to be placed in geosynchronous orbit approximately 35,000 km above Earth and transmitting this energy directly to large-scale solar power plants on Earth via infrared lasers. In this way, nearly 24-hour energy production independent of daylight is targeted.
First Orbit Test in 2028
The startup has raised $20 million in investment to date, and a portion of this budget was used for an aerial demonstration of the technology. A laser system mounted on a light aircraft demonstrated the system’s viability by transmitting energy to a ground-based receiver from a distance of 5 kilometers. Although space-based energy systems have been seen as science fiction for many years, the significant drop in space launch costs over the last decade has brought this field a little closer to commercial reality. However, there are still significant hurdles to overcome. Sending giant solar panels to geosynchronous orbit is still much more costly than installing panels on Earth. Additionally, wireless energy transmission from space to Earth is also in the early development stage.
The company aims to send its first test satellite to low Earth orbit in 2028, and then commission its final system in geosynchronous orbit in 2030 to start megawatt-level energy transmission.
Not a Competition-Free Environment
There are other companies developing similar solutions. While Aetherflux works on a laser-based transmission technique like Overview, Emrod and the Orbital Composites/Virtus Solis teams adopt a different method based on microwaves. Microwaves are less affected by clouds and moisture, which reduces weather-related interruptions. However, since this technology cannot be integrated into existing solar farms, companies need to build their own new ground stations.
Overview Energy’s advantage is easier integration, since it uses existing solar farms as receiving infrastructure. Nevertheless, the company still needs to prove that energy beams sent from space are safe and will not drift off target. Laser efficiency also matters: losses incurred when converting the collected energy to infrared in space could wipe out the system’s advantages entirely.
For the past decade, the world has been obsessed with Artificial Intelligence. We worry about ChatGPT taking our jobs or Skynet taking our lives. But while we were looking up at the cloud, scientists were looking down into the petri dish.
The era of silicon computing is hitting a wall. Moore’s Law is slowing down, and our hunger for data is burning the planet. The world’s fastest supercomputers now require the energy of a small city just to run simulations. But nature solved this problem billions of years ago. The human brain operates on just 20 watts of power—barely enough to dim a lightbulb—yet it outperforms every machine we have ever built.
This realization has birthed a new, terrifyingly brilliant field of science: Organoid Intelligence (OI).
What is Organoid Intelligence?
Organoid Intelligence is not science fiction; it is happening right now in laboratories across the globe. Scientists are using stem cells to grow three-dimensional clusters of human brain tissue called “organoids.” These are not plastic chips; they are living, biological neural networks.
Unlike traditional AI, which simulates thinking using binary code (0s and 1s), organoids actually think using chemical and electrical signals, just like you do.
The “DishBrain” Experiment: It Can Play Games
The proof of concept arrived with a project known as “DishBrain.” Researchers at Cortical Labs connected a cluster of 800,000 living neurons to a computer interface and forced them to play the vintage video game Pong.
They didn’t write code to tell the cells how to play. Instead, they used a system of rewards and punishments. When the cells missed the ball, they received a chaotic, unpleasant electrical signal. When they hit the ball, they received a predictable, rhythmic signal.
In just five minutes, the biological tissue learned how to play. It adapted. It evolved. It learned faster than many AI reinforcement learning models. This wasn’t a simulation of learning; it was life struggling to understand a digital environment.
The Rise of “Wetware”
We are moving from “Hardware” and “Software” to “Wetware.”
Imagine a future data center. Instead of rows of humming metal server racks, you walk into a warm, humid room filled with glass canisters. Inside, millions of biological processors float in nutrient-rich pink fluid.
These biological supercomputers could revolutionize everything. They could run the Metaverse with intuitive, human-like NPCs (Non-Player Characters). They could solve medical mysteries that silicon computers can’t grasp. And they could do it all while consuming a fraction of the energy we use today.
The Ethical Nightmare
However, this technology opens a Pandora’s Box of ethical horror. If a computer is made of living brain tissue, does it have rights?
- Can it feel pain? If we “punish” the processor with electrical noise to make it calculate faster, are we torturing a living entity?
- Is it conscious? At what point does a complex neural network become “aware”?
- The slavery of the neuron: Are we creating a race of biological slaves, bred only to process our data, trapped forever in a dark server rack without a body?
Conclusion
The transition from silicon to biology seems inevitable due to efficiency demands. But as we rush to build the ultimate computer, we must ask ourselves: Are we ready for our machines to be alive?
The line between man and machine isn’t just blurring; it is vanishing. The computer of the future won’t just sit on your desk. It might be a distant cousin of your own brain, living inside a box.
Published: December 13, 2025 at 7:00 am · Updated: December 12, 2025 at 8:17 am
By Ana
Edited and fact-checked: December 13, 2025 at 7:00 am
In Brief
By 2026, prediction markets have evolved into powerful forecasting tools, leveraging blockchain, regulation, and liquidity to provide real-time, sentiment-driven probabilities that shape expectations across crypto and broader events.
Prediction markets have become a powerful force in how crypto communities, investors, and institutions form expectations.
By 2026, improvements in blockchain infrastructure, regulatory clarity, and huge increases in liquidity have turned some of these markets into serious forecasting tools. Instead of relying on polls or expert predictions, people can now see real-time aggregated probabilities shaped by collective sentiment and money.
Below are ten of the most influential, widely used prediction markets shaping crypto forecasting and broader event predictions today — from decentralized Web3 protocols to regulated real-world event exchanges.
Polymarket, launched in 2020 on the Polygon blockchain, remains one of the largest decentralized prediction markets globally. Users bet on outcomes — from politics to macroeconomic events to crypto-relevant outcomes — using USDC, making participation relatively accessible.
By 2025, Polymarket’s cumulative trading volume had reportedly topped $7.5 billion. At its peak, monthly volume exceeded $1.16 billion. Its combination of decent liquidity, simple UX, and decentralized settlement makes it a go-to choice for traders seeking a crypto-native, permissionless platform to express views on future events.
Polymarket is often praised for its speed: markets are created rapidly, and new bets — even during fast-moving events like elections or economic data releases — reflect shifting sentiment almost in real time. For crypto users trying to gauge market mood around regulation, halving events, or macro shocks, Polymarket’s blending of traditional forecasting with crypto settlement offers a unique advantage.
Kalshi distinguishes itself from many crypto-native prediction markets by being a regulated real-money exchange. As of 2025, it has become one of the dominant players in global prediction-market volume.
According to recent data, Kalshi captured roughly 60% of global prediction-market activity by September 2025. It offers binary outcome contracts on a wide range of real-world events — from macroeconomic data to major political outcomes to sports — which appeals to institutions and users seeking regulated certainty rather than decentralized speculation.
Because its contracts settle via official data sources and clearinghouses, Kalshi provides clearer compliance and legitimacy than many purely on-chain platforms. This makes it particularly useful for users or funds looking to integrate prediction-derived probabilities into broader investment strategies. As mainstream interest grows, Kalshi’s rise underscores that prediction markets are evolving beyond niche crypto tools into recognized financial infrastructure.
Built within the Polkadot ecosystem, Zeitgeist offers a fully decentralized prediction market engine. It allows community-driven market creation, where users can propose, vote for, and trade predictions on real-world and crypto-native events. Its governance-based model aligns with Polkadot’s decentralized, multi-chain philosophy, making Zeitgeist a strong contender in forecasting on-chain events, protocol upgrades, or governance outcomes.
Because it’s on-chain and governed by its community, Zeitgeist represents the “pure Web3” ideal: no central clearinghouse, no intermediaries, and transparent rules. For users interested in forecasting crypto-native events — like token launches, network upgrades, or DeFi protocol moves — Zeitgeist provides a decentralized alternative to traditional prediction markets.
Gnosis is one of the oldest names in decentralized prediction markets. Through Omen, its community-driven front-end platform, users can create a wide variety of markets — from political forecasts to niche, crypto-ecosystem-specific questions. Omen and Gnosis have influenced how DAOs, NFT projects, and DeFi communities gauge sentiment and expectations.
Even though Gnosis/Omen may not always match the liquidity of giants like Polymarket or Kalshi, their strength lies in flexibility and community-driven design. For forecast-driven DAOs or decentralized projects needing tailor-made questions — such as “Will protocol X implement feature Y by date Z?” — Omen remains a reliable platform. Its long history and decentralized ethos continue to attract users who value governance, transparency, and Web3-native settlement.
Manifold Markets offers a different flavor of prediction markets — bridging trading and social forecasting. It combines a lightweight, social-style interface with prediction market mechanics. Users can create markets on any topic, including crypto-related questions, cultural events, macroeconomic outcomes, and public sentiment questions.
According to recent reporting, Manifold once attracted over 200,000 users, positioning itself as a kind of “community voting pool.”
While its daily active user numbers reportedly dipped in 2025, it remains a popular venue for social forecasting and sentiment measurement.
For crypto observers, Manifold’s appeal lies in its ability to surface retail sentiment and community expectations — which often precede viral market moves, meme-coin pops, or narrative-driven cycles. Because its barrier to entry is low and it encourages open participation, Manifold can act as an early-warning indicator or a gauge of “what the crowd thinks will happen.”
Augur was one of the first decentralized prediction markets in crypto. Originally launched in 2018, it introduced the concept of permissionless, user-created markets on Ethereum. While Augur’s initial DAO lost momentum, the protocol and its core infrastructure remain relevant, especially with renewed interest in on-chain, modular oracles and decentralized governance.
In 2026, Augur’s appeal lies not necessarily in massive liquidity, but in its architecture: open-market creation, decentralized settlement, and the ability to link predictions to smart contracts. That makes it suitable for forecasts tied directly to on-chain governance, protocol metrics, or decentralized applications — rather than external real-world data. For developers, protocol teams, or crypto-native users wanting full Web3 sovereignty, Augur remains a foundational building block.
While not always presented as a “classic” prediction market, SynFutures and similar expiry-based futures markets blur the line between derivatives trading and forecasting. Users can buy futures or directional bets on crypto prices, events, or market behaviors — effectively betting on outcomes rather than simply holding assets.
Some market commentators note that expiry or futures-style contracts behave similarly to prediction contracts: instead of owning a token long-term, you take a view on what will happen and profit if reality matches your expectation.
This model may attract traders more familiar with derivatives than betting, broadening the appeal of forecasting mechanisms in crypto. For price-sensitive traders or those seeking leveraged exposure to expected outcomes, these expiry-style markets function as prediction-adjacent tools.
About The Author
Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.
The entertainment and learning industries are undergoing a significant transformation. Amusement parks, museums, science centers, and family attractions are starting to evolve in ways that make them feel more connected than ever before. These destinations once served very different purposes, yet today they are being shaped by a shared expectation from audiences – the desire for more interactive, immersive, and emotionally engaging experiences.
This shift is not driven by a single trend but by a deeper change in visitor behavior. People now expect physical environments to respond to them in meaningful ways. Children move between digital and physical worlds effortlessly, and adults want experiences that feel active rather than passive. As a result, attractions can no longer rely solely on static displays or traditional rides. They need to engage visitors through interaction, storytelling, and real-time responsiveness.
Why traditional entertainment spaces started evolving
Over the past decade, operators across different sectors introduced small enhancements intended to modernize their spaces. Theme parks experimented with VR elements. Museums tested projection-based storytelling. Science centers added gesture-driven simulations. Family attractions introduced hybrid zones where physical play connected to digital layers.
These early experiments changed visitor expectations. Once people experienced an environment that responded to their movements or choices, they began to look for similar levels of engagement everywhere.
This gradual shift pushed four major industries — amusement, edutainment, culture, and family entertainment — toward a shared direction. Each sector started borrowing ideas and improving its visitor experience in similar ways, leading to a new type of destination that blends story, play, learning, and digital interaction seamlessly.
How modern visitors experience these new spaces
Today, entering an attraction feels fundamentally different from what it was a decade ago. Instead of following a fixed path, visitors are invited into environments that adapt to their presence.
- A ride may integrate narrative elements that continue before and after the physical experience.
- A museum exhibition can feel like walking inside a reconstructed historical moment.
- A children’s learning zone may use real-time feedback to turn curiosity into guided exploration.
- A family adventure area may combine physical tasks with digital characters that remember your progress.
These experiences are not defined by age or industry category. They are defined by immersion, responsiveness, and the feeling that the visitor is an active participant.
The growing importance of the “immersive layer”
What truly unites these modern destinations is the emergence of a digital-interactive layer that sits on top of the physical environment. This layer is not always visible, but it determines how the space behaves.
It includes real-time 3D worlds, mixed-reality storytelling, projection-based environments, sensor-driven interactions, and digital systems that track progress or personalize the experience. It allows attractions to evolve continuously without completely rebuilding them. This makes destinations more adaptable, more engaging, and more capable of meeting rising audience expectations.
This immersive layer is now becoming essential for operators who want their destinations to feel current and relevant.
Where TILTLABS contributes to this industry transformation
As the demand for immersive experiences grows, the need for teams who can design this digital layer becomes critical. TILTLABS plays a key role in this evolution by creating real-time 3D environments, interactive systems, and mixed-reality extensions that enhance physical attractions.
We support a wide range of destinations — from amusement parks and family attractions to museums and science centers — by building the interactive foundation that makes these spaces feel alive. Our work helps operators transition from traditional, static formats to environments that invite exploration, participation, and emotional connection.
TILTLABS focuses on enabling destinations to evolve with the expectations of modern audiences. By integrating digital storytelling, gamified engagement, and immersive visual design into existing and new physical spaces, we help create the kind of experiences visitors now expect.
As global entertainment continues shifting toward immersion, these digital-interactive layers will define how destinations grow and stay relevant. TILTLABS is committed to supporting this transformation, helping create environments that blend learning, play, culture, and storytelling into one continuous, engaging experience.
In the late 19th century, a chilling belief swept through Europe. Detectives and scientists alike were obsessed with a concept called “Optography.” The theory was simple but terrifying: they believed that the human eye worked exactly like a camera. If a person was murdered, the last thing they saw—the face of their killer—would remain frozen on their retina, like a developed photograph.
Scientists actually decapitated rabbits and examined the eyes of murder victims, desperately trying to develop these “retinal photos.”
Of course, we now know this was biologically impossible. The eye resets. The image fades. But the obsession behind it never died. Humans have always been desperate to capture the “final moment” and preserve human consciousness.
Fast forward to today, and we are no longer looking at dead eyes. We are looking at live brains.
The New Optography: Neuralink and the “Brain Backup”
What was once a Victorian ghost story is becoming Silicon Valley’s biggest project. Companies like Neuralink are developing Brain-Computer Interfaces (BCIs) that don’t just “read” the brain, but interact with it.
The science is shifting from observation to extraction.
Neuroscientists and futurists postulate a concept called Whole Brain Emulation (WBE). The logic is terrifyingly simple:
1. Your brain is essentially a biological computer processing electrical signals.
2. Every neuron and synapse (the connectome) can, in principle, be mapped.
3. That map can, theoretically, be “uploaded” to a machine.
We aren’t trying to capture a single static image on a retina anymore. We are trying to copy the entire operating system of a human being.
The Afterlife Has an IP Address: Enter the Metaverse
If you upload a human mind, where does it live? It can’t float in a void. It needs a world. It needs a body.
This is the ultimate destiny of the Metaverse.
Forget gaming or buying virtual land for a moment. Imagine a future where the Metaverse is the heaven we built for ourselves.
- The Body: Your physical body dies, but your consciousness wakes up in a custom-designed 3D avatar.
- The World: You aren’t limited by physics. You can fly, teleport, or revisit memories perfectly rendered in 3D.
- The Connection: You can attend your great-grandchild’s graduation, not as a ghost, but as a digital presence in the room via Mixed Reality glasses.
The “Ship of Theseus” Paradox
Here is the question that will keep you up at night, and the one science hasn’t answered yet:
If we scan your brain, map your memories, and upload “You” to the Metaverse Planet… is it really you? Or is it just a perfect copy of you, while the real you ceases to exist?
In the 19th century, they looked for ghosts in the eyes of the dead. In the 21st century, we are building ghosts out of code.
We are the first generation in history with a chance to choose our own afterlife. The question is: Are you brave enough to upload yourself?
I’m a developer who’s spent decades working with game engines and AI systems. And watching NPCs stand motionless in elaborate, carefully crafted virtual spaces felt like a waste. These worlds had 3D environments, physics, avatars, ambiance—everything needed for immersion except inhabitants that felt alive.
The recent explosion of accessible large language models presented an opportunity I couldn’t ignore. What if we could teach NPCs to actually perceive their environment, understand what people were saying to them, and respond with something resembling intelligence?
That question led me down a path that resulted in a modular, open-source NPC framework. I built it primarily to answer whether this was even possible at scale in OpenSimulator. What I discovered was surprising—not just technically, but about what we might be missing in our virtual worlds.
The fundamental problem
Let me describe what traditional NPC development looks like in OpenSimulator.
The platform provides built-in functions for basic NPC control: you can make them walk to coordinates, sit on objects, move their heads, and say things. But actual behavior requires extensive scripting.
Want an NPC to sit in an available chair? You need collision detection, object classification algorithms, occupancy checking, and furniture prioritization. Want them to avoid walking through walls? Better build pathfinding. Want them to respond to what someone says? Keyword matching and branching dialog trees.
Every behavior multiplies the complexity. Every new interaction requires new code. Most grid owners don’t have the technical depth to build sophisticated NPCs, so they settle for static decorations that occasionally speak.
There’s a deeper problem too: NPCs don’t know what they’re looking at. When someone asks an NPC, “What’s near you?” a traditional NPC might respond with a canned line. But it has no actual sensor data about its surroundings. It’s describing a fantasy, not reality.
Building spatial awareness
The first breakthrough in my framework was solving the environmental awareness problem.
(Image courtesy Darin Murphy.)
I built a Senses module that continuously scans the NPC’s surroundings. It detects nearby avatars, objects, and furniture, measuring distances, tracking positions, and assessing whether furniture is occupied. This sensory data gets formatted into a structured context and injected into every AI conversation.
Here’s what that looks like in practice. When someone talks to the NPC, the Chat module prepares the conversation context like this:
AROUND-ME:1,dc5904e0-de29-4dd4-b126-e969d85d1f82,owner:Darin Murphy,2.129770m,in front of me,level; following,avatars=1,OBJECTS=Left End of White Couch (The left end of a elegant White Couch adorn with a soft red pillow with goldn swirls printed on it.) [scripted, to my left, 1.6m, size:1.4×1.3×1.3m], White Couch Mid-section (The middle section of a elegant white couch.) [scripted, in front of me to my left, 1.8m, size:1.0×1.3×1.0m], Small lit candle (A small flame adornes this little fat candle) [scripted, front-right, 2.0m, size:0.1×0.2×0.1m], Rotating Carousel (Beautiful little hand carved horse of various colored saddles and manes ride endlessly around in this beautiful carouel) [scripted, front-right, 2.4m, size:0.3×0.3×0.3m], Coffee Table 1 ((No Description)) [furniture, front-right, 2.5m, size:2.3×0.6×1.2m], White Couch Mid-section (The middle section of a elegant white couch.) [scripted, in front of me to my left, 2.6m, size:1.0×1.3×1.0m], Small lit candle (A small flame adornes this little fat candle) [scripted, front-right, 2.9m, size:0.1×0.2×0.1m], Right End of White Couch (The right end of a elegant white couch adored with fluffy soft pillows) [scripted, in front of me, 3.4m, size:1.4×1.2×1.6m], Executive Table Lamp (touch) (Beautiful Silver base adorn with a medium size red this Table Lamp is dark yellow lamp shade.) [scripted, to my right, 4.1m, size:0.6×1.0×0.6m], Executive End Table (Small dark wood end table) [furniture, to my right, 4.1m, size:0.8×0.8×0.9m]\nUser
This information travels with every message to the AI model. When the NPC responds, it can say things like “I see you standing by the blue chair” or “Sarah’s been nearby.” The responses stay grounded in reality.
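As a rough sketch of what that handoff involves (Python for readability rather than LSL; the class and field names below are my own invention, not the framework’s API), flattening sensor readings into a context line and prepending it to the user’s message might look like:

```python
# Illustrative sketch only: the dataclass and field names are hypothetical,
# not the framework's actual API. It shows the idea of flattening sensor
# readings into a single context string prepended to each chat message.
from dataclasses import dataclass

@dataclass
class Sighting:
    name: str          # object or avatar name
    kind: str          # "avatar", "furniture", "scripted", ...
    distance_m: float
    bearing: str       # e.g. "in front of me", "to my left"

def build_context(sightings: list[Sighting]) -> str:
    """Flatten sightings into one AROUND-ME style line, nearest first."""
    parts = [
        f"{s.name} [{s.kind}, {s.bearing}, {s.distance_m:.1f}m]"
        for s in sorted(sightings, key=lambda s: s.distance_m)
    ]
    return "AROUND-ME: " + ", ".join(parts)

def prompt_for(user_message: str, sightings: list[Sighting]) -> str:
    """Inject the sensory context ahead of the user's message."""
    return build_context(sightings) + "\nUser: " + user_message
```

Sorting nearest-first keeps the most relevant objects at the front of the context, which matters when prompt budgets are tight.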
This solved a critical problem I’ve seen with AI-driven NPCs: hallucination. Language models will happily describe mountains that don’t exist, furniture that isn’t there, or entire landscapes they’ve invented. By explicitly telling the AI what’s actually present in the environment, responses stay rooted in what visitors actually see.
The architecture: six scripts, one system
Rather than building a monolithic script, I designed the framework as modular components.
Main.lsl creates the NPC and orchestrates communication between modules. It’s the nervous system connecting all the parts.
Chat.lsl handles AI integration. This is where the magic happens—it combines user messages with sensory data, sends everything to an AI model (local or cloud), and interprets responses. The framework supports KoboldAI for local deployments, plus OpenAI, OpenRouter, Anthropic, and HuggingFace for cloud-based options. Switching between providers requires only changing a configuration file.
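As an illustration of config-driven provider switching (a Python sketch; the config keys are invented for this example, and the endpoint URLs should be treated as placeholders rather than verified values), the idea is that the chat logic never changes — only a lookup does:

```python
# Sketch of config-driven provider selection (my own illustration; the
# framework's actual config keys and endpoints may differ). Swapping
# providers touches only the configuration, never the chat logic.
PROVIDERS = {
    # URLs are placeholders for illustration, not verified values;
    # additional providers would be configured the same way.
    "koboldai":   {"url": "http://localhost:5001/api/v1/generate", "auth": None},
    "openai":     {"url": "https://api.openai.com/v1/chat/completions", "auth": "Bearer"},
    "openrouter": {"url": "https://openrouter.ai/api/v1/chat/completions", "auth": "Bearer"},
}

def resolve_provider(config: dict) -> dict:
    """Pick the active provider's connection details from a config dict."""
    name = config.get("ai_provider", "koboldai")
    if name not in PROVIDERS:
        raise ValueError(f"unknown provider: {name}")
    details = dict(PROVIDERS[name])
    details["api_key"] = config.get("api_key")  # None is fine for local models
    return details
```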
Senses.lsl provides that environmental awareness I mentioned—continuously scanning and reporting on what’s nearby.
Actions.lsl manages movement: following avatars, sitting on furniture, and navigating. It includes velocity prediction so NPCs don’t constantly chase behind moving targets. It also includes universal seating awareness to prevent awkward moments where two NPCs try to sit in the same chair.
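Both ideas are simple to sketch (Python for readability; the function names and the half-second lookahead are my own illustrative choices, not the framework’s values): aim at the target’s extrapolated position rather than its current one, and keep a shared registry of claimed seats.

```python
# Illustrative sketch only (not the framework's LSL): follow behavior aims
# at where the target will be, and a shared seat registry prevents two NPCs
# from claiming the same chair.
def predict_target(pos, vel, lookahead_s=0.5):
    """Linearly extrapolate the followed avatar's position."""
    return tuple(p + v * lookahead_s for p, v in zip(pos, vel))

claimed_seats: set[str] = set()

def try_claim_seat(seat_id: str) -> bool:
    """Universal seating awareness: the first claim on a seat wins."""
    if seat_id in claimed_seats:
        return False
    claimed_seats.add(seat_id)
    return True
```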
Pathfinding.lsl implements A* navigation with real-time obstacle avoidance. Instead of pre-baked navigation meshes, the NPC maps its environment dynamically. It distinguishes walls from furniture through keyword analysis and dimensional measurements. It detects doorways by casting rays in multiple directions. It even tries to find alternate routes around obstacles.
Gestures.lsl triggers animations based on AI output. When the AI model outputs markers like %smile% or %wave%, this module plays the corresponding animations at appropriate times.
All six scripts communicate through a coordinated timer system with staggered cycles. This prevents timer collisions and distributes computational load. Each module has a clearly defined role and speaks a common language through link messages.
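To make the staggered-cycle idea concrete, here is a small sketch (Python for readability; the module names match the article but the three-second period is illustrative) that assigns each module a distinct phase offset within a shared period so no two timers fire on the same tick:

```python
# Illustrative sketch: spread N module timers evenly across one period so
# their cycles never collide and computational load is distributed.
def stagger_offsets(modules: list[str], period_s: float) -> dict[str, float]:
    """Return a start-delay (seconds) for each module's repeating timer."""
    step = period_s / len(modules)
    return {name: round(i * step, 3) for i, name in enumerate(modules)}
```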
Intelligent movement that actually works
Getting NPCs to navigate naturally proved more complex than I expected.
The naive approach—just call llMoveToTarget() and point at the destination—results in NPCs getting stuck, walking through walls, or oscillating helplessly when blocked. Real navigation requires actual pathfinding.
The Pathfinding module implements A* search, which is standard in game development but relatively rare in OpenSim scripts. It’s computationally expensive, so I’ve had to optimize carefully for LSL’s constraints.
What makes it work is dynamic obstacle detection. Instead of pre-calculated navigation meshes, the Senses module continuously feeds the Pathfinding module with current object positions. If someone moves furniture, paths automatically recalculate. If a door opens or closes, the system adapts.
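The algorithm itself is standard. As a compact reference sketch (Python rather than LSL, operating on a grid snapshot of the currently blocked cells rather than the framework’s live sensor feed), A* looks like this:

```python
# Compact A* over a grid snapshot of current obstacles. Illustrative: the
# framework's LSL implementation differs, but the algorithm is the same.
import heapq
from itertools import count

def astar(start, goal, blocked, width, height):
    """Return a list of (x, y) cells from start to goal, or None."""
    def h(c):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    tie = count()  # tiebreaker so the heap never compares cells directly
    open_heap = [(h(start), next(tie), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < width and 0 <= ny < height and nxt not in blocked:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), next(tie), nxt))
    return None
```

Because `blocked` is rebuilt from fresh sensor data before each search, moved furniture or a closed door simply changes the snapshot and the next path routes around it.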
One specific challenge was wall versus furniture classification. The system needs to distinguish between “this is a wall I can’t pass through” and “this is a chair I might want to sit in.” I solved this through a multi-layered approach: keyword analysis (checking object names and descriptions), dimensional analysis (measuring aspect ratios), and type-based classification.
This matters because misclassification causes bizarre behavior. An NPC trying to walk through a cabinet or sit on a wall looks broken, not intelligent.
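The heuristic can be sketched roughly as follows (Python for readability; the keyword lists and thresholds are my guesses for illustration, not the framework’s actual values):

```python
# Hypothetical sketch of the wall-vs-furniture heuristic: keyword analysis
# first, then dimensional analysis (aspect ratio) as a fallback. The word
# lists and thresholds are invented for illustration.
SIT_WORDS = ("chair", "couch", "bench", "stool", "sofa", "seat")
WALL_WORDS = ("wall", "partition", "fence", "panel")

def classify(name: str, size_xyz: tuple[float, float, float]) -> str:
    lowered = name.lower()
    if any(w in lowered for w in SIT_WORDS):
        return "furniture"
    if any(w in lowered for w in WALL_WORDS):
        return "wall"
    x, y, z = size_xyz
    thin = min(x, y)
    # fallback: very thin, tall prims are almost certainly walls
    if thin > 0 and z / thin > 4.0 and z > 2.0:
        return "wall"
    return "obstacle"
```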
The pathfinding also detects portals—open doorways between rooms. By casting rays in 16 directions at multiple distances and measuring gap widths, the system finds openings and verifies they’re actually passable (an NPC needs more than 0.5 meters to fit through).
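A simplified sketch of that probing (Python; the ray-cast function is passed in as a stand-in for the engine’s ray casting, and the structure is my own illustration rather than the framework’s code):

```python
# Illustrative portal detection: probe 16 compass headings and keep the
# ones with no obstacle in range, then apply the article's 0.5 m rule to
# decide whether a gap is actually passable.
import math

def find_openings(hit_distance, max_range=5.0):
    """hit_distance(angle) -> distance to first obstacle along that heading.
    Returns headings (radians) that are clear within max_range."""
    step = 2 * math.pi / 16
    return [i * step for i in range(16) if hit_distance(i * step) >= max_range]

def passable(gap_width_m: float, npc_width: float = 0.5) -> bool:
    """A doorway is usable only if wider than the NPC needs to fit through."""
    return gap_width_m > npc_width
```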
Making gestures matter
An NPC that stands perfectly still while talking feels robotic. Real communication involves body language.
(Image courtesy Darin Murphy.)
I implemented a gesture system where the AI model learns to output special markers: %smile%, %wave%, %nod_head%, and compound gestures like %nod_head_smile%. The Chat module detects these markers, strips them from visible text, and sends gesture triggers to the Gestures module.
Output: %smile% Thank you for your compliment! It’s always wonderful to hear positive feedback from our guests.
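A sketch of the marker handling (my own illustration, not the framework’s code): detect the `%…%` tokens, strip them from the visible text, and return the gesture names for playback.

```python
# Illustrative marker parsing: gestures ride along inside the model's text
# output and are split out before the reply is displayed.
import re

MARKER = re.compile(r"%([a-z_]+)%\s*")

def split_gestures(reply: str) -> tuple[str, list[str]]:
    """Return (visible_text, gesture_names) from a raw model reply."""
    gestures = MARKER.findall(reply)
    text = MARKER.sub("", reply).strip()
    return text, gestures
```

For the sample output above, this yields the clean sentence for chat plus `["smile"]` for the animation trigger.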
The configuration philosophy
One principle guided my entire design: non-programmers should be able to customize NPC behavior.
The framework uses configuration files instead of hard-coded values. A general.cfg file contains over 100 parameters—timer settings, AI provider configurations, sensor ranges, pathfinding parameters, and movement speeds. All documented, with sensible defaults.
A personality.cfg file lets you define the NPC’s character. This is essentially a system prompt that shapes how the AI responds. You can create a friendly shopkeeper, a stern gatekeeper, a scholarly librarian, or a cheerful tour guide. The personality file also specifies rules about gesture usage, conversation boundaries, and sensing constraints.
A third configuration file, seating.cfg, lets content creators assign priority scores to different furniture. Prefer NPCs to sit on benches over chairs? Configure it. Want them to avoid bar stools? Add a rule. This lets non-technical builders shape NPC behavior without touching code.
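The priority mechanic might reduce to something like this sketch: each furniture type gets a score, higher scores win, and a zero score means "never use". The scores, type names, and default handling here are my assumptions, not values from the actual `seating.cfg`.

```python
# Hypothetical sketch of seating.cfg-driven seat selection. The priority
# table would be loaded from the config file; values here are invented.

SEAT_PRIORITIES = {
    "bench": 10,      # preferred
    "chair": 5,
    "bar_stool": 0,   # 0 = avoid entirely
}

def choose_seat(available):
    """Pick the available seat type with the highest positive priority."""
    # Unlisted types get a default low-but-usable priority of 1.
    candidates = [s for s in available if SEAT_PRIORITIES.get(s, 1) > 0]
    if not candidates:
        return None
    return max(candidates, key=lambda s: SEAT_PRIORITIES.get(s, 1))

print(choose_seat(["chair", "bench", "bar_stool"]))  # bench
print(choose_seat(["bar_stool"]))                    # None
```

Because the policy is pure data, "prefer benches over chairs" or "avoid bar stools" is a one-line config change rather than a script edit.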
(Image courtesy Darin Murphy.)
Why this matters
Here’s what struck me while building this: OpenSimulator has always positioned itself as the budget alternative to commercial virtual worlds. Lower cost, more control, more freedom. But that positioning came with a tradeoff. It has fewer features, less polish, and less sense of life.
Intelligent NPCs change that equation. Suddenly, an OpenSim grid can offer something that commercial platforms struggle with: NPCs built and customized by the community itself, shaped to fit specific use cases, and deeply integrated with regional storytelling and design.
An educational institution could create teaching assistants that actually answer student questions contextually. A roleplay community could populate its world with quest givers that adapt to player choices. A commercial grid could deploy NPCs that provide customer service or guidance.
The technical challenges are real. LSL has a 64KB memory limit per script, so careful optimization is necessary. Scaling multiple NPCs requires load distribution. But the core concept works.
Current state and what’s next
I built this framework to answer a fundamental question: can we create intelligent NPCs at scale in OpenSimulator? The answer appears to be yes, at least for single NPCs and small groups.
The framework is production-ready for single-NPC deployments in various scenarios. I’m currently testing it with multiple NPCs to identify scaling optimizations and measure actual performance under load.
Some features I’m considering for future development:
- Conversation memory – Storing interaction history so NPCs remember previous encounters with specific avatars
- Multi-NPC coordination – Allowing NPCs to be aware of each other and coordinate complex behaviors
- Voice synthesis – Giving NPCs actual spoken voices instead of just text
- Mood modeling – Tracking NPC emotional states that influence responses and behaviors
- Learning from interaction – Using feedback to improve navigation and social responses over time
But the most exciting possibilities come from the community.
What happens when educators deploy NPCs for interactive learning? When artists create installations featuring characters with distinct personalities? When builders integrate them into complex, evolving storylines?
Testing and real-world feedback
I’m actively looking to understand whether there’s genuine interest in this framework within the OpenSim community. The space is admittedly niche — virtual worlds are no longer a mainstream media topic — but within that niche, intelligent NPCs could be genuinely transformative.
I’m particularly interested in connecting with grid owners and educators who might want to test this. Real-world feedback on performance, use cases, and technical challenges would be invaluable.
How do NPCs perform with multiple simultaneous conversations? What happens with dozens of visitors interacting with an NPC at once? Are there specific behaviors or interactions that developers actually want?
This information would help me understand what features matter most and where optimization should focus.
The bigger picture
Building this framework gave me a perspective shift. Virtual worlds are often discussed in terms of their technical capabilities, such as avatar counts, region performance, and rendering fidelity. But what actually makes a world feel alive is the presence of intelligent inhabitants.
Second Life succeeded partly because bots and NPCs added texture to the experience, even when simple. OpenSimulator has never fully capitalized on this potential. The tools have always been there, but the technical barrier has been high.
If that barrier can be lowered, if grid owners can deploy intelligent, contextually-aware NPCs without becoming expert scripters, it opens possibilities for more immersive, responsive virtual spaces.
The question isn’t whether we can build intelligent NPCs technically. We can. The question is whether there’s enough community interest to make it worthwhile to continue developing, optimizing, and extending this particular framework.
I built it because I had to know the answer. Now I’m curious what others think.
The AI-Driven NPC Framework for OpenSimulator is currently in active development and I’m exploring licensing models and seeking genuine community and educational interest to inform ongoing development priorities. If you’re a grid owner, educator, or developer interested in intelligent NPCs for virtual worlds, contact me at [email protected] about your specific use cases and requirements.
Darin Murphy has been working in the computer field all his life. His first experience with chatbots was ELIZA, and, since then, he’s tried out many others — most recently, ChatGPT. He enjoys OpenSim, exploring AI, and playing games.
Firedancer is now live on the Solana Mainnet after three years of development.
It enhances network resilience by providing validator client diversity.
The launch positions Solana for future major speed and scale upgrades.
The Solana validator client Firedancer launched on Solana Mainnet on Friday. Developed over three years by Jump Trading, a prominent crypto investment and trading firm, Firedancer had been running on a set of validators for 100 days before its launch, producing more than 50,000 blocks.
News of the full client release came from Solana’s X post, marking the completion of Firedancer’s exit from beta status. Built from scratch in C and C++, Firedancer is an independent alternative to the prevailing validator client.
BREAKING: After 3 years of development, Firedancer is now live on Solana Mainnet, and has been running on a handful of validators for 100 days, successfully producing 50,000 blocks 🔥💃 pic.twitter.com/Y0WxxEj2WL
— Solana (@solana) December 12, 2025
Firedancer’s architecture allows it to process large volumes of transactions, potentially hitting 1 million TPS and eventually enabling demanding DApps to run on the blockchain. Its 100 days of successful operation on a handful of validators, producing 50,000 blocks, marks a first step toward reducing the network’s historical dependence on a single codebase — a weakness that has previously halted operation of the blockchain.
Firedancer’s developmental history
The development of Firedancer began in 2022 at Jump Crypto, primarily to address the network-stability risk created by a lack of client diversity.
Reliance on a single main validator codebase meant the network could be endangered by a single software defect. Months before the full mainnet launch, a hybrid form known as “Frankendancer” was deployed, combining Firedancer’s high-performance networking stack with existing execution-layer components.
To validate the new code and harden it, the Firedancer team also launched a major bug bounty program in 2024, offering rewards to developers who could find weaknesses in its infrastructure.
Unlocking new performance ceilings
Firedancer’s introduction unlocks the potential for faster transaction processing: the client has been shown capable of handling more than one million transactions per second. Realizing that potential, however, depends on changes to Solana’s core structure. Specifically, Jump’s Firedancer team has been promoting an adjustment to Solana’s current block computation hard limit (SIMD-0370).
Firedancer’s capabilities will make it easier to implement more complex solutions on Solana. And as adoption of the standalone client grows, the network becomes more decentralized and more resistant to single points of failure.
Also Read: Bhutan Launches First State-Backed Gold Token on Solana
Bitnomial received CFTC approval to clear fully collateralized swaps to launch prediction markets.
The company is now the only U.S. exchange offering unified trading for prediction markets and derivatives.
It will also offer clearing services to its own platform and outside prediction market partners.
Bitnomial Clearinghouse, LLC, a subsidiary of U.S. derivatives exchange company Bitnomial, announced on Friday that it received approval from the U.S. Commodity Futures Trading Commission (CFTC) to clear fully collateralized swaps. The approval enables the firm to launch prediction markets for clients and partners.
This makes Bitnomial the sole U.S. full-service exchange and clearinghouse offering a full suite of products, including prediction markets, under a single regulatory structure and unified liquidity mechanism. It allows for new types of hedging and risk management related to economic and crypto outcomes.
Bitnomial, which owns and operates CFTC-regulated exchange (DCM), clearinghouse (DCO), and clearing brokerage (FCM) subsidiaries, now enters the prediction markets space alongside competitors such as Kalshi and Polymarket.
New prediction market focus
As per the official release, this regulatory approval immediately allows Bitnomial to offer prediction markets focusing on a range of outcomes, including cryptocurrency price movements and macroeconomic indicators, which will integrate with its current product offerings. Furthermore, the clearinghouse intends to offer its services to partner prediction market platforms, expanding the market’s accessibility.
The company has established a regulatory foundation within the U.S. derivatives landscape, offering U.S. perpetuals, physical futures, and options on various assets within its Bitcoin Complex and Crypto Complex product lines.
Unique crypto margin
Bitnomial’s previous distinction was its unique capability as the only U.S. clearinghouse providing crypto margin collateral and settlement for approved products and assets. This infrastructure allows participants to use digital assets for both posting collateral and settling trades.
The firm will incorporate its new prediction market products with this expansion, letting users get specific exposure to different event outcomes and accurately offset risk within the entire product suite.
Clearing services for external partners
As a neutral clearing provider, Bitnomial will focus on infrastructure, providing external partners with access to collateral mobility across both USD and crypto, using the same margin and settlement technology that underpins Bitnomial’s existing derivatives offerings.
Michael Dunn, President of Bitnomial Exchange and Clearinghouse, commented on the development, stating, “Prediction markets represent the next frontier for regulated derivatives, and no other U.S. venue offers this combination of products with unified trading, clearing, and margin.”
He added that the DCO approval is crucial because it “allows us to serve both our own exchange and external partners, building a clearing network that strengthens the entire prediction market ecosystem.”
Development in the prediction market
Earlier this week, crypto exchange Gemini secured a Designated Contract Market (DCM) license from the CFTC to offer U.S.-regulated prediction markets. The approval, granted to Gemini Titan, LLC, will initially cover binary event contracts.
It also puts the Winklevoss-led platform in direct competition with Bitnomial and others, confirming that the regulatory landscape for prediction markets in the United States is rapidly opening and becoming a new frontier for regulated financial products.
Also Read: Polymarket Odds for Bitcoin Outperforming Gold in 2025 Plunge to 1%
Florida seized $1.5 million in cryptocurrency linked to a Chinese scammer.
The victim reportedly lost $47,421 to an online investment scam in Citrus County.
The suspect, Tu Weizhi, is charged with money laundering, grand theft, and fraud.
Prosecutors in Florida have reportedly seized about $1.5 million in cryptocurrency after tracking funds from a July 2024 Citrus County investment scam to a wallet tied to a Chinese national named Tu Weizhi.
The seizure happened on Thursday after investigators followed a payment trail that began when a local resident reported losing $47,421 to what appeared to be an online investment opportunity.
According to the release, officials moved to trace the money using a court order to secure the entire wallet connected to Tu. Prosecutors said the goal was not only to recover the victim’s money but also to stop the wider flow of funds linked to the alleged scheme.
Attorney General James Uthmeier said the effort was led by the Office of Statewide Prosecution’s Cyber Fraud Enforcement Unit. He noted the work of investigators, stating, “While scammers are changing their methods, I am proud of our Statewide Prosecutors’ ability to adapt and deliver justice.”
He also thanked the Citrus County Sheriff’s Office for its part in the case and noted that this effort helped make the victim whole again. According to the Attorney General’s office, the seized wallet held AVAX, Dogecoin, Pepe, and Solana tokens, with a combined value of about $1.5 million.
Charges against the suspect
Tu, who is believed to be in China, has been charged with money laundering, grand theft, and operating an organized scheme to defraud. Florida authorities said he will be arrested if he enters the United States.
The state carried out the seizure under its fugitive disentitlement rules, which allow courts to act against assets tied to criminal cases when the accused stays outside the U.S. jurisdiction. This process limits a suspect’s ability to use Florida courts unless they appear to face charges.
The case highlights how large the fraud landscape has become. In a report shared earlier this year, the Federal Trade Commission recorded more than $12 billion in fraud losses in 2024, and investment scams made up about $5.7 billion of that total.
Also Read: Hacker Exploits Binance Co-CEO’s WeChat to Pump Mubarakah Token
Published: December 12, 2025, by Ana
In Brief
Google just released an upgraded version of its Deep Research agent, now available to developers through a new Interactions API — with consumer rollouts coming soon to Search, NotebookLM, and the Gemini app.
Technology company Google stated that it has released a substantially upgraded version of its Deep Research agent, now accessible to developers through a new Interactions API, with consumer availability planned for Search, NotebookLM, and the Gemini application.
For the first time, developers are able to integrate Google’s most advanced autonomous research capabilities directly into their own applications. Gemini Deep Research is designed for extended information-gathering and synthesis tasks, and its reasoning system is powered by Gemini 3 Pro, described as the company’s most factual model to date. It has been trained to reduce hallucinations and enhance the clarity and reliability of complex reports. By expanding multi-step reinforcement learning for search, the agent can independently navigate intricate information environments with improved accuracy.
The agent constructs its research workflow step by step by generating queries, reviewing results, identifying missing information, and continuing the process until it completes its investigation. The new release includes major upgrades to web search performance, enabling deeper navigation into websites to extract highly specific data.
According to Google, the latest version delivers state-of-the-art performance on Humanity’s Last Exam (HLE) and DeepSearchQA, while also achieving its strongest results to date on BrowseComp. It is optimized for producing well-researched reports at significantly lower cost and will soon be integrated into Google Search, NotebookLM, Google Finance, and an enhanced version of the Gemini application.
Early testing already shows substantial gains across fields where accuracy and detailed contextual understanding are essential. In financial services, firms have begun using Gemini Deep Research to streamline the early phases of due diligence by aggregating market indicators, competitor insights, and compliance considerations from both public and proprietary sources. This has made the agent a valuable tool for investment teams conducting preliminary workflows.
Within the scientific sector, the agent is being applied to complex safety-related research. Axiom Bio, a company developing AI systems for predicting drug toxicity, reported that Gemini Deep Research provided a depth of initial analysis and precision across biomedical literature that allowed its research and discovery processes to progress more rapidly.
For developers building automated research systems, the Gemini Deep Research agent offers broad functionality for synthesizing information and producing detailed, verifiable reports. It supports unified analysis of user documents such as PDFs, CSVs, and text files alongside public web sources by combining File Upload with the File Search Tool.
It manages extensive context effectively, enabling developers to include large amounts of background material directly in the prompt. Output structure can be shaped through prompting, allowing full control over report layout, headings, and data presentation. The system provides granular citations for claims, ensuring transparency regarding data provenance, and supports structured outputs, including JSON schemas, for streamlined integration into downstream applications.
Introducing the Gemini Deep Research agent for developers.
It can create a plan, spot gaps, and autonomously navigate the web to produce detailed reports. 🧵 pic.twitter.com/L6dBjYg8Yv
— Google DeepMind (@GoogleDeepMind) December 11, 2025
Google Open-Sources DeepSearchQA Benchmark To Advance Multi-Step Web Research Capabilities
Additionally, Google announced the open-sourcing of a new benchmark called DeepSearchQA, created to evaluate how effectively research agents handle comprehensive, multi-step web-based inquiry. DeepSearchQA includes 900 manually constructed causal-chain tasks spanning 17 subject areas, with each step building on the conclusions of the previous one. Rather than relying on simple fact-retrieval questions, the benchmark measures an agent’s ability to produce complete and exhaustive answer sets, enabling assessment of both research accuracy and retrieval coverage.
DeepSearchQA is also intended as a diagnostic resource for studying the effects of extended reasoning time. Internal testing has shown that performance improves when agents are given more opportunities to run additional searches and reasoning cycles, an area Google expects to expand on in future iterations.
The benchmark materials are being released to encourage continued progress toward more capable research agents. Developers and researchers can review the dataset, leaderboard, and starter Colab, as well as examine the underlying methodology described in the accompanying technical report.
Although the Deep Research landscape is already highly competitive, Google’s updated agent introduces notable enhancements that build on the capabilities of the existing Gemini 3 models. The release also marks the first time developers can integrate this technology directly into their own applications, offering a significant improvement to the research functionality within third-party products.
About The Author
Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.