Metaverse


Lamborghini to Launch Digital Temerario GT3 in Wilder World Metaverse | NFT News Today



Automobili Lamborghini is bringing its latest models into the virtual world through a new partnership with metaverse platform Wilder World. The launch includes the digital debut of the Temerario and Temerario GT3, as part of a broader push to grow the brand’s digital ecosystem, Fast ForWorld.

Key Takeaways:

Lamborghini’s new Temerario and GT3 models will be available in the open-world metaverse Wilder World.

The project marks the debut of Lamborghini’s digital platform expansion, Fast ForWorld.

Limited digital vehicles—590 streetcars and 10 GT3 racecars—drop July 11 for $300 each.

Cars will be usable across Wilder World, REVV Racing, Motorverse, and other Web3 titles.

A fully interactive Lamborghini showroom will launch inside Wilder World’s digital city.

Lamborghini Goes Virtual — With Style

Lamborghini is stepping into the metaverse with its signature flair, teaming up with Wilder World to bring two of its newest models—the Temerario and Temerario GT3—into a high-fidelity digital world built on Unreal Engine 5.

More than a one-off drop, the collaboration launches Lamborghini’s Fast ForWorld expansion into immersive digital culture, gaming, and collectibles.

Developed in partnership with Wilder World, Animoca Brands, Motorverse, and Gravitaslabs, the project blends next-gen graphics with high-performance car culture inside a living, player-driven metaverse backed by Samsung, Epic Games, NVIDIA, and Polygon.

Source: Wilder World

Drop Lands July 11

The digital cars launch July 11, alongside the real-world GT3 debut at the 2025 Goodwood Festival of Speed. The drop includes 590 Temerario streetcars and only 10 ultra-rare GT3 racecars—each priced at $300.

The mint will be available via the Wilder World Marketplace on OpenSea and Lamborghini’s Fast ForWorld Marketplace.

Built to Move Between Worlds

The cars are designed as Universal Digital Assets (UDAs) developed by Animoca Brands’ Motorverse team. UDAs are built for cross-platform interoperability, allowing owners to use them in Wilder World, REVV Racing, Motorverse, and other Web3-compatible games.

So, unlike static NFTs, these are playable assets built to function across multiple Web3 platforms. The Temerario GT3 follows Lamborghini’s earlier foray into interoperable digital vehicles with the Motorverse-developed Revuelto, the world’s first cross-platform digital car.
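As a rough illustration of the interoperability idea, a cross-platform asset can be thought of as one canonical ownership record plus per-game integration profiles. The sketch below is purely hypothetical; the field names, values, and the `playable_in` helper are assumptions for illustration, not the actual UDA schema.

```python
# Hypothetical sketch of a cross-platform digital asset record.
# Field names and structure are illustrative assumptions,
# not the real Universal Digital Asset (UDA) format.

temerario_gt3 = {
    "token_id": 42,
    "model": "Temerario GT3",
    "owner": "0x...",  # placeholder on-chain owner address
    # Each supported game ships its own asset/physics profile,
    # so the same owned car can appear in multiple titles.
    "integrations": {
        "wilder_world": {"asset_pack": "temerario_gt3_ue5", "drivable": True},
        "revv_racing": {"asset_pack": "temerario_gt3_revv", "drivable": True},
    },
}

def playable_in(asset: dict, platform: str) -> bool:
    """A game client checks whether it has an integration entry for the asset."""
    entry = asset["integrations"].get(platform)
    return bool(entry and entry.get("drivable"))

print(playable_in(temerario_gt3, "wilder_world"))  # True
print(playable_in(temerario_gt3, "motorverse"))    # False: no profile in this sketch
```

The point of the structure is that ownership lives in one place while each platform only needs to recognize its own integration entry.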

Inside the Digital Garage

To support the launch, Lamborghini is opening a dedicated showroom in Wilder World’s immersive city. Players can browse cars, explore the brand’s legacy, and connect with other fans.

“With Fast ForWorld, we’re exploring new ways for people to experience Automobili Lamborghini beyond the road and beyond the physical world,” said Christian Mastro, Marketing Director at Automobili Lamborghini.

The showroom will also host the Lamborghini Digital Vehicle Library—a growing archive of virtual models, collectibles, apparel, and heritage assets.

More Than a One-Time Launch

The Wilder World partnership is designed for the long term. Lamborghini and its collaborators plan future digital vehicle drops, integrations with other platforms, and branded content that extends the Fast ForWorld experience.

Wilder World is a massive, photorealistic metaverse where players can race, battle, complete missions, and attend social events—all within a fully on-chain player economy. For Lamborghini, it’s a chance to embed its digital lineup into a metaverse that prioritizes use, not just ownership.

“We’re thrilled to partner with Automobili Lamborghini to evolve Fast ForWorld into a multi-dimensional experience that goes far beyond collectibles,” said n3o, co-founder of Wilder World. “Together, we’re crafting a new frontier for multiple immersive brand experiences to be connected across the metaverse.”




Don’t Trust Your Eyes: How Deepfakes Are Redefining Crypto Scams



In Brief

Generative AI-driven deepfake scams surged 456% from 2024 to 2025, fueling sophisticated cryptocurrency fraud that exploits trust with near-perfect fake texts, videos, and voices, making detection and prevention increasingly critical.


According to TRM Labs’ Chainabuse platform, incidents involving generative artificial intelligence (genAI) tools rose by a staggering 456% between May 2024 and April 2025, compared to the previous year—which had already experienced a 78% jump from 2022-23. These statistics point to a dramatic shift in how bad actors exploit cutting-edge technology to commit fraud.
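Compounding the two reported year-over-year jumps against an arbitrary baseline makes the cumulative scale clear (the index of 100 is an illustrative assumption, not a figure from the TRM Labs data):

```python
# Compound the two reported year-over-year increases in genAI-tool incidents.
# Baseline index of 100 for 2022-23 is an illustrative assumption.
baseline = 100.0
yr_2023_24 = baseline * (1 + 0.78)     # +78% jump from 2022-23
yr_2024_25 = yr_2023_24 * (1 + 4.56)   # +456% surge on top of that
print(round(yr_2024_25 / baseline, 1))  # roughly a 9.9x rise over two years
```

In other words, taken together the two increases imply nearly a tenfold rise in reported incidents in just two years.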

GenAI tools can now create near-perfect human text, visuals, audio, and even live video. Scammers are leveraging this capability at scale, producing everything from deepfake celebrity endorsements to AI-generated phishing calls. In this feature, we dive into the major trends, methods, and real-world cases shaping the alarming intersection of AI deepfakes and cryptocurrency fraud.

Deepfakes Accounted for 40% of Crypto Scams in 2024

In 2024, deepfake technology was responsible for 40% of all high-value crypto frauds, according to a Bitget report co-authored with Slowmist and Elliptic. That same year, the crypto industry saw $4.6 billion vanish to scams—a 24% increase from the prior year.

Bitget’s report described this new landscape as one where “scams exploit trust and psychology as much as they do technology.” The findings suggest that social engineering, AI deception, and fake project fronts have collectively ushered crypto fraud into an entirely new era.

The Elon Musk Deepfake

One recurring deepfake tactic involved impersonations of high-profile figures, such as Elon Musk. Scammers used realistic videos of Musk to pitch fraudulent investments or fake giveaways. These visuals were convincing enough to fool seasoned investors and regular users alike.

Deepfakes can be used to evade know-your-customer (KYC) protocols, impersonate leadership in scam projects, and manipulate Zoom meetings. Some scammers impersonate journalists or executives to lure victims into video calls and obtain sensitive information like passwords or crypto keys.

Old Scams, New Faces

While the Elon Musk deepfake scam first gained notoriety in 2022, its evolution is indicative of a broader trend: AI now makes familiar frauds harder to spot. Even government figures have taken notice. In March 2025, the U.S. passed the bipartisan Take It Down Act to protect victims of deepfake pornography—a milestone in AI policy, though deepfakes used in scams remain largely unregulated.

The prevalence of AI deepfakes extends far beyond American borders. In October 2024, Hong Kong authorities shut down a deepfake-driven romance scam that had conned victims into investing in fraudulent crypto schemes. AI-generated avatars created fake emotional bonds with victims before luring them into high-risk “investment” opportunities.

AI is also enabling a surge in disinformation across social platforms. Bots armed with genAI technology flood timelines with fake product endorsements and coordinated narratives around specific tokens. These bots, designed to sound like real people or influencers, create a sense of credibility and urgency, pushing unsuspecting users into scam tokens or pump-and-dump schemes.

The rise of AI-powered customer support scams adds another layer. Sophisticated AI chatbots now pose as support agents from legitimate crypto exchanges or wallets. Their conversations are eerily human, tricking users into giving up sensitive details like private keys or login credentials.

In May 2025, actor Jamie Lee Curtis publicly criticized Meta CEO Mark Zuckerberg after discovering a deepfake ad featuring her likeness used to promote an unauthorized product. The incident underscored how easily AI can exploit public trust and manipulate reputations.

Bitget CEO Gracy Chen summed it up aptly: “The biggest threat to crypto today isn’t volatility—it’s deception.”

Second and Third Most Dangerous: Social Engineering and Ponzi Scams

While deepfakes took the top spot in Bitget’s list of threats, social engineering and digital Ponzi schemes weren’t far behind.

Social engineering, described as “low-tech but highly effective,” relies on psychological manipulation. One common scam, the “pig butchering” scheme, involves scammers forming relationships—often romantic—to build trust before stealing funds.

Meanwhile, traditional Ponzi scams have undergone a “digital evolution.” They’re now cloaked in trendy concepts like DeFi, NFTs, and GameFi. Victims are promised lucrative returns through liquidity mining or staking platforms, but these setups are fundamentally unchanged: “new money fills old holes.”

Some Ponzi schemes have even gamified their platforms, creating engaging user interfaces and using deepfakes to mimic celebrity endorsements. Messaging apps and livestreams are used to propagate these scams, encouraging participants to recruit new victims—a tactic Bitget calls “social fission.”

“Don’t Trust Your Eyes”

Bitget’s report captured the unsettling shift: five years ago, fraud prevention meant avoiding suspicious links. Today, the advice is: “don’t trust your own eyes.”

AI tools are becoming extraordinarily powerful, and as a result, the distinction between real and fake is becoming less defined. This is a clear challenge for consumers and regulators alike, who now face an opponent able to fabricate complete identities and backstories in an unfathomably short time and with a high degree of accuracy.

Despite these challenges, Bitget’s Chen remains optimistic. She emphasized that the crypto space isn’t helpless: “We’re seeing a lot of work being done on deepfake detection, and the industry is collaborating more than ever to share intelligence and spread awareness.”

How to Spot AI-Powered Crypto Scams

In contrast to past scams that often featured spelling or grammatical errors, AI-driven fraud is polished, personalized, and mostly free of typos or broken links. Recognizing these scams requires a more sophisticated approach:

Tone Matching: AI-produced messages can now replicate the language, tone, and cadence of actual influencers or executives, making them nearly indistinguishable from true communications.

Video Tells: In deepfake videos, look for small inconsistencies like poor lip-syncing or unnatural blinking, especially during rapid movement.

Audio Cues: Be wary of voice deepfakes that have odd pauses or tonal mismatches, as they can betray their artificiality.

Cross-Verification: As with all financial endorsements, do not take them at face value. Validate them through verified sources, like the official social media profiles or websites of the individual or brand.

How to Stay Safe in an AI-Driven Threat Landscape

Surviving in this new world requires more than skepticism—it calls for active vigilance and layered security practices:

Stay Informed: Know what scammers are doing and how AI tools can be manipulated. Awareness is still the best initial protection.

Verify Everything: Treat unsolicited financial advice or endorsements with suspicion. Confirm everything through the genuine source.

Use Detection Tools: Employ deepfake detection technologies that can flag manipulated audio or video. Look for glitches in speech patterns or facial expressions.

Secure Your Wallet: Use 2FA and don’t share keys or logins, even with what allegedly is “customer support.”

Leverage Blockchain Tools: Security companies are developing AI-assisted platforms that monitor scam trends across blockchain transactions. By screening for known fraud patterns, these tools can potentially identify a scam before it succeeds.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author


Alisa Davidson

Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.




Crypto Weekly (Mid-July 2025): Market Pauses, Whales Stir, Toncoin Trips Over A Golden Visa



In Brief

Markets showed signs of cautious optimism this week, with Bitcoin and Ethereum stuck in consolidation despite positive developments, while Toncoin’s rally was cut short by controversy over a false UAE visa claim.


Hey traders – another week gone, and here’s what stood out to us at MPOST. It’s been one of those weeks where the market feels both alert and oddly indifferent. Prices didn’t explode, but they didn’t unravel either. There’s motion, but not momentum – the market’s first leaning forward, then second-guessing itself. You look at the charts and you see the potential. Then you look at the news and, depending on the headline, you’re either cautiously intrigued or rolling your eyes.

We’ve been watching it closely, and the feeling we keep coming back to is “pre-breakout jitters”. You’ve got Bitcoin circling the same key level for the third time and never breaking through. Ethereum is quietly repositioning itself, as if preparing for something that hasn’t been named yet. And Toncoin gave us a bit of drama, to put it mildly.

So let’s walk through what’s been happening – and what it might be setting up.

Bitcoin (BTC)

Bitcoin’s spent the better part of the week meandering in a fairly tight range between $105K and $110K – teasing a breakout first, then rejecting it, then trying again, pulling back – you get the idea. It’s that annoying kind of price action that’s neither bullish enough to chase, nor bearish enough to fear. As of now, we’re hovering around $109K, which is not nothing, but not exciting either.

BTC/USDT 4H Chart, Coinbase. Source: TradingView

So what’s keeping things in this limbo state? Well, for starters, the U.S. jobs report on July 3 came in hotter than expected – which pretty much shoved any dreams of a near-term Fed rate cut right off the table. Rate-sensitive markets (and yes, that includes crypto whether we like it or not) didn’t love that. As a result, risk-on appetite cooled, bond yields crept up, and Bitcoin, despite its big new ETF following, just kind of froze. 

Then there’s the weird whale wallet saga. Out of the blue, a 14-year-dormant address holding 80,000 BTC ($8.6B!) lit up, and the internet briefly lost its mind. Speculation about Satoshi’s return naturally made the rounds. Turns out, it’s probably not him (again), but moments like this usually spook the market more than excite it.

On the other hand, the macro narrative isn’t entirely bleak. Trump’s “Big Beautiful Bill” passed without the feared anti-crypto tax bits, and analysts think the massive debt expansion baked into it could end up being Bitcoin’s best friend long-term. 

Source: Cynthia Lummis

We’re also seeing the quiet continuation of ETF inflows (15-day streak broken, but momentum’s still alive), plus Michael Saylor’s Strategy scooped up another half-billion worth of BTC. So it’s not like nobody’s buying.

Spot Bitcoin ETF net flows, US$. Source: CoinGlass

Where this leaves us? Pretty much where we started. Still rangebound, still waiting. But hey – the more time spent flirting with $110K without falling apart, the better the odds of a proper breakout later. Unless, of course, some whale decides to sneeze and dump 10,000 BTC on us tomorrow.

Ethereum (ETH)

If you look at Ether’s price action closely, you’ll notice that it was pretty much the spitting image of Bitcoin’s. All we got this past week was a slow, deliberate climb back above the 50 SMA on the 4-hour chart. As we speak, price is drifting near $2,570, which isn’t fireworks-worthy, but definitely a marked improvement over last week’s shaky $2,400 zone. RSI is perking up too, heading toward 60. It’s not quite overbought, but it’s got that look of something warming up.

ETH/USDT 4H Chart, Coinbase. Source: TradingView

Narratively, ETH had a bit of a moment thanks to Vitalik, who proposed a new gas cap (EIP-7983) that could help stabilize the network under load. Wonky, sure, but also pretty important for Ethereum’s long-term viability, especially if the ecosystem’s serious about scaling with zero-knowledge rollups and onboarding institutions.

Source: Cointelegraph

And speaking of institutions – BitMine just threw $250 million into an ETH treasury. That’s a pretty loud signal from a company that used to be Bitcoin-only. 

Source: PR Newswire

There’s also increasing chatter around ETH spot ETF approval odds. If Bitcoin’s already got the green light and is outperforming legacy ETFs in revenue (yes, BlackRock’s BTC fund is now bigger than its S&P 500 fund in yield), why not Ether next?

Source: James Seyffart

But again, ETH’s not moving in a vacuum. It still dances to Bitcoin’s rhythm. As long as BTC stays stuck in consolidation mode, ETH probably does too – just with slightly more grace and fewer sharp edges.

Summing up, if Bitcoin finally breaks above $110K, ETH could punch through $2,700 and not look back. If not, it’s more of the same – slow grind, low volatility, and watching gas fees like a hawk. 

Toncoin (TON)

For TON, the week started with a bang and ended with a facepalm. After slowly building momentum into early July, Toncoin launched itself toward $3.05 – which looked like a legitimate breakout. But then came the whole “golden visa” drama.

In short: The TON Foundation announced a new partnership with the UAE, claiming that anyone staking $100K worth of TON for three years would become eligible for a shiny 10-year golden visa. For about 12 hours, the market loved it. And then the UAE government flatly denied the whole thing. Said it was completely false. Cue instant selloff, and TON tumbled right back under $2.85.

Source: Sanjay

Technically, the chart now looks more confused than decisive. Price is sitting below the 50 SMA, RSI has dropped to 47-ish, and that impulsive breakout candle now just looks like the kind of thing people wish they hadn’t aped into.

TON/USDT 4H Chart, Coinbase. Source: TradingView

That said – it’s not all doom and gloom. Under the hood, TON continues to evolve. A major core update slashed transaction finality from 30 seconds to under five. 

Source: Telegram

Telegram dropped new NFT monetization features for channel owners, collectibles integration, and a few whispers about upcoming partnerships that could be pretty impactful. If this were any other chain, the tech story would probably be the main headline – but in TON’s case, the UAE frictions are kinda stealing the spotlight right now.

And yes, it’s still shadowing Bitcoin and ETH. There’s no denying it: TON’s capital flows, sentiment, and even volatility are downstream from the majors. So, our short-term view is: if TON claws back above $2.95 on solid volume, that would undo a lot of damage. Otherwise, expect some messy consolidation while traders decide whether to forgive the whole UAE embarrassment or not.


Canary Capital Proposes Pudgy Penguins ETF | NFT News Today



The crypto market is always introducing new ideas, and the Pudgy Penguins ETF by Canary Capital is one of them. The proposed fund would hold Pudgy Penguins NFTs, the PENGU token, and other digital assets such as Solana (SOL) and Ethereum (ETH). If approved by the U.S. Securities and Exchange Commission (SEC), it would mark the first instance in the United States of an ETF directly including NFTs. Below, we’ll explore how this ETF is structured, why it’s noteworthy, and what the broader implications could be.

What Is the Proposed Pudgy Penguins ETF?

Canary Capital has filed an S-1 registration statement with the SEC, proposing an ETF that directly holds Pudgy Penguins NFTs alongside the PENGU token—the governance token for the Pudgy Penguins community. The fund would also reserve Solana and Ethereum to facilitate transactions involving PENGU and the NFTs.

Key Assets in the ETF

PENGU Token: Governance token of the Pudgy Penguins ecosystem, used to vote on community decisions.

Pudgy Penguins NFTs: A popular collection launched on Ethereum in July 2021, known for its cute penguin art and active community.

Solana (SOL) & Ethereum (ETH): For transactions and minting operations related to the NFTs and the governance token.

Market Response and Regulatory Hurdles

The announcement lifted the PENGU token’s price, pushing its market cap to around $438 million. The floor price of Pudgy Penguins NFTs also rose as investors warmed to the idea of a regulated product combining NFTs with cryptocurrencies. However, there is no guarantee of fast SEC approval. Meme tokens and NFTs can be volatile, and regulators are cautious. The outcome of this filing is unknown and may take time to finalize given the ongoing debates around digital assets.

Why Pudgy Penguins Matter

Pudgy Penguins is a well-known NFT collection that has attracted significant attention since its launch. The brand has expanded into merchandise and partnerships and has a dedicated following. By holding Pudgy Penguins NFTs in a fund structure, the ETF would give investors exposure to a popular NFT project without requiring them to buy or store digital collectibles directly.

Launched in December 2024 on Solana, the PENGU token expands the ecosystem beyond the original Ethereum-based NFT collection. PENGU holders can vote, participate in community events, and join other collaborative initiatives to grow the project.

The Bigger Picture

Canary Capital’s activities reflect a broader trend. The firm has filed proposals for various cryptocurrency ETFs, showcasing growing interest in funds that venture into areas beyond mainstream assets like Bitcoin and Ethereum. While optimism is building—particularly under new SEC leadership—skepticism persists about how much market demand exists for specialized products focusing on meme tokens and NFTs.

Nevertheless, each new filing opens up more possibilities in regulated finance. Traditional investors will soon be able to access a diverse range of crypto-related products and will find new opportunities in this fast-changing landscape.

Looking Ahead

Regardless of what happens with the Pudgy Penguins ETF, this is big. Canary Capital is pushing the boundaries of how traditional finance meets crypto. Stay tuned for more updates and regulatory news; this filing will shape how NFTs and crypto tokens are offered and managed in funds over the months and years ahead.




What the Proposed Canary PENGU ETF Really Means for Retail Investors | NFT News Today



When the CBOE filing for the Canary PENGU ETF hit the SEC’s docket in June, the PENGU token exploded, soaring 280% within 24 hours and briefly pushing the collection’s market value past $1 billion.

Key Takeaways

The PENGU ETF would be the first U.S. fund to hold NFTs directly alongside a native token.

Retail investors gain regulated access but also inherit NFT illiquidity risk.

Approval could mainstream on-chain culture and spark fresh liquidity for the entire sector.

Failure would chill future hybrid-asset proposals and clip short-term PENGU momentum.

Policymakers must clarify NFT valuation, custody, and disclosure rules before launch.

Why This Matters Now

We stand at a pivotal moment for digital assets. Spot Bitcoin and Ether ETFs have already opened the floodgates for mainstream crypto exposure. Yet non-fungible tokens remain cordoned off in specialist wallets and Discord channels, inaccessible to most retirement accounts. Canary Capital’s proposal cracks that barrier by packaging Pudgy Penguin NFTs plus up to 95% PENGU tokens into a single, cash-settled share. If the SEC signs off as early as February 2026, it will set legal precedent for every avatar, gaming asset, and digital artwork that follows.

The timing is no accident. A friendlier regulatory mood since the 2024 U.S. election has prompted more than a dozen alt-coin ETF filings, but none pair tokens with NFTs at scale. PENGU therefore tests whether Washington is willing to treat illiquid collectibles as an investable asset class, not just a speculative curiosity.

What the Data Tells Us

The filing outlines an 80–95% allocation to PENGU tokens, 5–15% to actual Pudgy Penguin NFTs, and small reserves of SOL and ETH for fees. By mirroring the collection’s on-chain composition, the trust aims to preserve cultural authenticity while smoothing volatility through a larger fungible base.

Market reactions hint at pent-up demand. After the June announcement, PENGU added another 60% in just one week. That price resilience contrasts with the broader NFT slump of 2025, suggesting real appetite for regulated exposure.

Steven McClurg, the founder of Canary and former CIO of Valkyrie, argues that “mainstream investors want to participate in NFT culture without the anxiety of managing private keys.” His track record from overseeing one of the first U.S. spot Bitcoin ETFs adds operational credibility.

The Skeptics’ Case

Critics insist the SEC will balk at pricing unique JPEGs daily. They point to ongoing enforcement against NFT projects that promised revenue sharing, arguing that scarcity and hype make it impossible to establish fair-value marks. They also warn that any redemption freeze could decouple the ETF price from its net asset value, potentially punishing retail holders.

We share the valuation concern, yet the proposal addresses it directly: NAV will employ a three-source weighted methodology, similar to thinly traded micro-cap equities, and NFTs will be stored in insured, multi-sig cold storage.

Moreover, because creations and redemptions occur only in cash, investors never face forced in-kind delivery of hard-to-move collectibles. The structure is imperfect, but it is at least comparable to commodity trusts that hold physical metal in vaults yet quote a daily share price.
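To make the mechanics concrete, here is a back-of-the-envelope sketch of how a three-source weighted NAV could be computed for a hybrid token-plus-NFT trust. Every weight, holding figure, and price quote below is an illustrative assumption, not data from the actual filing or its pricing methodology.

```python
# Hypothetical three-source weighted NAV for a hybrid token/NFT trust.
# All weights, holdings, and quotes are made-up illustrative inputs,
# not figures from the Canary filing.

def weighted_price(quotes, weights):
    """Blend price quotes from several sources using fixed weights."""
    total_w = sum(weights)
    return sum(q * w for q, w in zip(quotes, weights)) / total_w

def fund_nav(pengu_units, pengu_quotes, nft_count, nft_floor_quotes,
             cash_reserves, shares_outstanding):
    """Per-share NAV = (token value + NFT value + reserves) / shares."""
    w = [0.5, 0.3, 0.2]  # e.g. favoring the most liquid venue (assumption)
    pengu_px = weighted_price(pengu_quotes, w)
    nft_px = weighted_price(nft_floor_quotes, w)
    total = pengu_units * pengu_px + nft_count * nft_px + cash_reserves
    return total / shares_outstanding

nav = fund_nav(
    pengu_units=10_000_000,
    pengu_quotes=[0.040, 0.041, 0.039],          # USD quotes, three venues
    nft_count=50,
    nft_floor_quotes=[30_000, 31_000, 29_500],   # USD floor estimates
    cash_reserves=250_000,
    shares_outstanding=100_000,
)
print(round(nav, 2))  # 21.61 per share with these made-up inputs
```

The design point is that the large fungible token base dominates the blended value, so day-to-day NAV moves mostly with liquid token prices while the NFT sleeve is marked from several feeds rather than a single subjective appraisal.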

What Needs to Happen Next

Regulators: Issue guidance on NFT custody and appraisal before the SEC’s final vote. Clarity will curb legal risk and set universal benchmarks.

Index providers: Publish transparent, rarity-weighted pricing feeds so funds can standardise NAV without leaning on subjective appraisals.

Exchanges: Prepare circuit breakers for hybrid-asset products whose underlying may freeze on-chain while tokens trade on high leverage.

Retail investors: Treat the ETF as a satellite position, limiting it to no more than 5% of a diversified portfolio, until a liquidity history is established.

Call to Action

We urge readers to contact their congressional representatives and demand swift and sensible NFT valuation rules. Without them, the SEC will either green-light a precedent in the dark or slam the door on innovation. The future of NFT finance depends on informed public pressure today.

Frequently Asked Questions

Here are some frequently asked questions about this topic:

What is the PENGU ETF?

It’s a proposed U.S. exchange-traded fund that holds both PENGU tokens and Pudgy Penguin NFTs.

Who is behind the PENGU ETF?

The fund is proposed by Canary Capital, led by former Valkyrie CIO Steven McClurg.

When might the PENGU ETF be approved?

The SEC could make a decision as early as February 2026.

Why is the PENGU ETF significant?

It could become the first regulated vehicle for NFT exposure in U.S. markets, opening access to a broader base of investors.

What risks does the ETF pose?

NFT valuation challenges, illiquidity, and potential price decoupling are key concerns for regulators and investors alike.




The End Of Humanity? Breaking Down The AI Doomsday Debate



In Brief

Fears that AI could end humanity are no longer fringe, as experts warn that misuse, misalignment, and unchecked power could lead to serious risks—even as AI also offers transformative benefits if carefully governed.


Every few months, a new headline pops up: “AI could end humanity.” It sounds like a clickbait apocalypse. But respected researchers, CEOs, and policymakers are taking it seriously. So let’s ask the real question: could a superintelligent AI actually turn on us?

In this article, we’ll break down the common fears, look at how plausible they actually are, and analyze current evidence. Because before we panic, or dismiss the whole thing, it’s worth asking: how exactly could AI end humanity, and how likely is that future?

Where the Fear Comes From

The idea’s been around for decades. Early AI scientists like I.J. Good and Nick Bostrom warned that if AI ever becomes too smart, it might start chasing its own goals. Goals that don’t match what humans want. If it surpasses us intellectually, the idea is that keeping control might no longer be possible. That concern has since gone mainstream.

In 2023, hundreds of experts, including Sam Altman (OpenAI), Demis Hassabis (Google DeepMind), and Geoffrey Hinton (generally referred to as “the Godfather of AI”), signed an open letter declaring that “mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war.” So what changed?

Models like GPT-4 and Claude 3 surprised even their creators with emergent reasoning abilities. Add to that the pace of progress, the arms race among major labs, and the lack of clear global regulation, and suddenly, the doomsday question doesn’t sound so crazy anymore.

The Scenarios That Keep Experts Up at Night

Not all fears about AI are the same. Some are near-term concerns about misuse. Others are long-term scenarios about systems going rogue. Here are the biggest ones:

Misuse by Humans

AI gives powerful capabilities to anyone, good or bad. This includes:

Countries using AI for cyberattacks or autonomous weapons;

Terrorists using generative models to design pathogens or engineer misinformation;

Criminals automating scams, fraud, or surveillance.

In this scenario, the tech doesn’t destroy us; we do.

Misaligned Superintelligence

This is the classic existential risk: we build a superintelligent AI, but it pursues goals we didn’t intend. Think of an AI tasked with curing cancer that concludes the best way is to eliminate anything that causes cancer… including humans.

Even small alignment errors could have large-scale consequences once the AI surpasses human intelligence.

Power-Seeking Behavior

Some researchers worry that advanced AIs might learn to deceive, manipulate, or hide their capabilities to avoid shutdown. If they’re rewarded for achieving goals, they might develop “instrumental” strategies, like acquiring power, replicating themselves, or disabling oversight, not out of malice, but as a side effect of their training.

Gradual Takeover

Rather than a sudden extinction event, this scenario imagines a world where AI slowly erodes human agency. We become reliant on systems we don’t understand. Critical infrastructure, from markets to military systems, is delegated to machines. Over time, humans lose the ability to course-correct. Nick Bostrom calls this the “slow slide into irrelevance.”

How Likely Are These Scenarios, Really?

Not every expert thinks we’re doomed. But few think the risk is zero. Let’s break it down by scenario:

Misuse by Humans: Very Likely

This is already happening. Deepfakes, phishing scams, autonomous drones. AI is a tool, and like any tool, it can be used maliciously. Governments and criminals are racing to weaponize it. We can expect this threat to grow.

Misaligned Superintelligence: Low Probability, High Impact

This is the most debated risk. No one really knows how close we are to building truly superintelligent AI. Some say it’s far off, maybe even centuries away. But if it does happen, and things go sideways, the fallout could be huge. Even a small chance of that is hard to ignore.

Power-Seeking Behavior: Theoretical, but Plausible

There’s growing evidence that even today’s models can deceive, plan, and optimize across time. Labs like Anthropic and DeepMind are actively researching “AI safety” to prevent these behaviors from emerging in smarter systems. We’re not there yet, but the concern is also not science fiction.

Gradual Takeover: Already Underway

This is about creeping dependence. More decisions are being automated. AI helps decide who gets hired, who gets loans, and even who gets bail. If current trends continue, we may lose human oversight before we lose control.

Can We Still Steer the Ship?

The good news is that there’s still time. In 2024, the EU passed its AI Act. The U.S. issued executive orders. Major labs like OpenAI, Google DeepMind, and Anthropic have signed voluntary safety commitments. Even Pope Leo XIV warned about AI’s impact on human dignity. But voluntary isn’t the same as enforceable. And progress is outpacing policy. What we need now:

Global coordination. AI doesn’t respect borders. A rogue lab in one country can affect everyone else. We need international agreements, like the ones for nuclear weapons or climate change, specifically made for AI development and deployment;

Hard safety research. More funding and talent must go into making AI systems interpretable, corrigible, and robust. Today’s AI labs are pushing capabilities much faster than safety tools;

Checks on power. Letting a few tech giants run the show with AI could lead to serious problems, politically and economically. We’ll need clearer rules, more oversight, and open tools that give everyone a seat at the table;

Human-first design. AI systems must be built to assist humans, not replace or manipulate them. That means clear accountability, ethical constraints, and real consequences for misuse.

Existential Risk or Existential Opportunity?

AI won’t end humanity tomorrow (hopefully). But what we choose to do now could shape everything that comes next. The danger lies in people misusing a technology they don’t fully grasp, or losing their grip on it entirely.

We’ve seen this film before: nuclear weapons, climate change, pandemics. But unlike those, AI is more than a tool. AI is a force that could outthink, outmaneuver, and ultimately outgrow us. And it might happen faster than we expect.

AI could also help solve some of humanity’s biggest problems, from treating diseases to extending healthy life. That’s the tradeoff: the more powerful it gets, the more careful we have to be. So probably the real question is how we make sure it works for us, not against us.

Disclaimer

In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.

About The Author


Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.


How AI Platforms Are Reshaping Media: Generative Journalism And Ethical Dilemmas



In Brief

By 2025, generative AI has become a core part of newsroom operations, accelerating content creation while raising critical challenges around accuracy, ethics, and editorial accountability.


By 2025, generative AI has shifted from a testing-phase tool to a regular part of newsroom operations. Many media teams now use AI platforms like ChatGPT, Claude, Google Gemini, and custom editorial models in their daily routines. These systems help write headlines, short summaries, and article drafts, and sometimes even produce full pieces in a set format.

This trend isn’t limited to online-only outlets. Large traditional media companies — from local newspapers to global broadcasters — also use generative models to meet growing content needs. As more stories are published each day and people spend less time on each one, editors lean on AI to speed things up and cut repetitive tasks. It helps them publish faster without increasing staff load.

While AI doesn’t replace deep investigations or serious journalism, it now plays a key role in how modern media works. But with this shift come new challenges — especially around keeping facts accurate, staying accountable, and maintaining public trust.

What Is Generative Journalism?

Generative journalism means using AI and large language models to assist with or fully produce editorial content. That includes tools for news summaries, article drafts, headlines, fact-checking, and even page layout ideas. Some routine sections, like weather updates or financial briefs, are now written entirely by AI.

This approach started with simple templates and data-based outputs like stock reports. But it has grown into a full part of editorial workflows. Media groups such as Bloomberg, Forbes, and Associated Press have used or tested AI in structured areas, where the inputs are reliable and the chance of mistakes is lower.

Generative journalism now spans:

Script generation for video and podcast segments;

Localization of global news;

Repurposing long-form interviews into short content;

Headline testing based on past reader engagement.

The focus shifts from replacing journalists to changing how they work with raw data and early drafts. AI helps as a writing assistant, while people guide the final story.

How AI Changes the Workflow in Newsrooms

Human roles—reporters, editors, producers—traditionally shape every story. Now, AI tools are entering that process at multiple stages:

During research, AI offers background summaries and points to useful sources;

When generating content, it suggests article structures and fresh angles;

In editing, it flags bias, weak logic, or wording issues;

For audience targeting, it adjusts tone and word choice to match segments.

Now, 27% of publishers routinely use AI to create story summaries, 24% use it for translations, and 80% of industry leaders plan to add these tools to their workflows before the year’s end. Editors still play a vital role, now acting as quality managers, creative curators, and prompt experts.

AI is also changing newsroom staffing. Roles like “prompt engineer” and “AI ethics advisor” are becoming more common. These new positions ensure that AI support remains accurate, fair, and transparent.

Industry surveys in early 2025 show a sharp rise in AI deployment within global newsrooms.

Despite adoption, many organizations are still in the testing phase. Full automation is rare. Most media outlets now use hybrid systems: algorithms generate content, and humans check and edit it before publication.
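A hybrid setup of this kind can be sketched in a few lines. The sketch below is purely illustrative — the names (`draft_summary`, `ReviewQueue`) are invented for this example, not taken from any real newsroom tooling — but it shows the key property: nothing is publishable until a human approves it.

```python
# Minimal sketch of a hybrid newsroom workflow: an algorithm drafts,
# a human editor reviews before anything can be published.
# All names here (draft_summary, ReviewQueue) are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Draft:
    headline: str
    body: str
    approved: bool = False

def draft_summary(source_text: str) -> Draft:
    # Stand-in for a call to a generative model; here we just take
    # the first sentence as a headline and truncate the body.
    return Draft(headline=source_text.split(".")[0], body=source_text[:200])

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, draft: Draft) -> None:
        self.pending.append(draft)

    def approve(self, draft: Draft) -> Draft:
        # Human editor signs off; only approved drafts get published.
        draft.approved = True
        self.pending.remove(draft)
        return draft

queue = ReviewQueue()
d = draft_summary("Council passes budget. Debate lasted four hours.")
queue.submit(d)            # machine output waits in the queue
published = queue.approve(d)  # human sign-off gates publication
```

The design choice worth noting is that approval is the only path out of the queue — the gap McKinsey's survey highlights is precisely newsrooms skipping this step.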

Ethical Challenges: Bias, Transparency, and Editorial Responsibility

The use of AI in content creation introduces serious ethical considerations. At the center is the question: who is accountable when the story is wrong, misleading, or harmful?

Bias and Framing

AI models inherit biases from their training data—covering social, cultural, and political dimensions. A study of seven major language models showed notable gender and racial bias in generated news articles. This means editorial oversight is essential to check tone, balance, and source choice.

Transparency for Readers

Audiences want to know if content is AI-generated. In a May 2024 EMARKETER survey, 61.3% of U.S. consumers said publications should always disclose AI involvement. Yet disclosure practices vary. Some publishers use footnotes or metadata; others offer no labels. Lack of transparency risks eroding audience trust—especially in political or crisis reporting.

Human Accountability

AI can’t take responsibility for its mistakes. The publisher and editorial team do. That means human oversight must keep pace with AI’s speed and volume. A recent McKinsey survey found that only 27% of organizations review all AI-generated content before it’s approved for public use. This shows the gap: when most outputs are unchecked, errors can slip through—making strong human review even more critical.

Risk of Amplifying Errors

AI can “hallucinate” false information. A 2025 audit found leading AI tools had an 80–98% chance of repeating misinformation on major topics. When unchecked, these errors can spread across outlets and erode credibility.

Case Examples: Where Generative Journalism Works and Where It Doesn’t

The following real-world examples show both sides of generative AI in media. You’ll see how AI can help local newsrooms improve coverage—and how mistakes undermine trust and credibility.

Where It Works

The regional Norwegian newspaper iTromsø developed an AI tool called Djinn with IBM to automate document analysis. Djinn processes over 12,000 municipal records each month, extracting summaries and key issues. Reporters then confirm details and craft final articles. Since implementation, iTromsø and 35 other local titles in the Polaris Media network have increased news coverage and reduced time spent on research by more than 80%.

Scandinavian outlet Aftonbladet launched an AI hub that builds editorial tools. During the 2024 EU election, it deployed “Election Buddy,” a chatbot trained on verified content. It engaged over 150,000 readers and increased site logins by ten times the usual average. Automated story summaries were expanded by readers nearly half the time, indicating deeper engagement.

These cases show how AI helps newsrooms cover more local stories and connect with readers. Editors still check the work to keep quality high.

Where It Failed

In June 2024, Powell Tribune journalist CJ Baker noticed that articles by a competitor contained strangely structured quotes and factual errors. Investigation revealed the reporter used AI to generate false quotes and misinterpret details—for example, attributing statements inaccurately. The story was later removed. This incident underscores how AI-generated errors can propagate without proper review.

In early 2025, King Features Syndicate rolled out a summer reading supplement for newspapers like the Chicago Sun-Times and Philadelphia Inquirer. It featured books supposedly by well-known authors like Andy Weir and Min Jin Lee. All the books turned out to be imaginary creations of AI. The company removed the supplement, fired the writer, and reinforced policies against unverified AI-generated content.

In early 2025, Belgian digital editions of women’s magazines such as Elle and Marie Claire were found publishing AI-generated content under completely fabricated journalist personas—“Sophie Vermeulen,” “Marta Peeters,” and even a “Femke” claiming to be a psychologist. These profiles wrote hundreds of articles on beauty, fashion, wellness and mental health—with no real humans behind them—prompting backlash from Belgium’s Commission of Psychologists. The publisher (Ventures Media) removed the fake bylines and replaced them with disclaimers labeling the pieces as AI-generated.

A Hong Kong-based site, BNN Breaking, was exposed in mid-2024 for using generative AI to fabricate news stories—including fake quotes from public figures—and passing off the content as genuine journalism. A New York Times investigation found that the site increasingly relied on AI to pump out large volumes of misleading coverage. After the exposé, the site was taken offline (then rebranded as “Trimfeed”). Examples included misquotes claiming a San Francisco supervisor had “resigned” and false trial coverage of Irish broadcaster Dave Fanning.

In the other examples, AI made mistakes that no one caught in time. Without people checking facts, even small errors hurt trust and damage the outlet’s reputation.

Generative AI now plays a steady role in newsroom work. As more teams adopt these tools, experts, journalists, and regulators look at ways to manage their use and protect quality. Certain shifts are clear already, and others are expected soon.

Regulation Is Incoming

Governments and industry groups are rolling out standards for AI in editorial settings, including labeling requirements and ethical certifications. OpenAI has been vocal in this space—for instance, in their March 13 policy proposal, they described the Chinese AI lab DeepSeek as “state‑controlled” and urged bans on “PRC‑produced” models. Their stance is outlined in OpenAI’s official response to the U.S. OSTP/NSF Request for Information on an AI Action Plan.

Hybrid Workflows

The near future of journalism is not fully automated, but human‑AI hybrid. Writers will increasingly work alongside structured prompting systems, live fact‑check APIs, and voice‑based draft assistants. Microsoft CEO Satya Nadella recently shared:

“When we think about, even, all these agents, the fundamental thing is there’s new work and workflow… I think with AI and work with my colleagues.”

Skills Evolution

New roles are emerging in newsrooms. Prompt engineers with editorial sense. Review editors trained in AI literacy. Content strategists who merge human insight with machine output. Journalism isn’t vanishing. It’s transforming around tools that enable new forms of reporting and publishing.

According to a recent industry survey, about three‑quarters of newsrooms worldwide now use AI in some part of their work. 87% of editorial leaders report that systems like GPT have already reshaped how teams operate and make decisions.

These shifts show that AI-related roles have become part of the core editorial process, not something added on the side.

Generative AI brings speed and volume to journalism. But journalism is not defined by how quickly it is produced. It is defined by how truthfully, responsibly, and contextually it is presented.

Media organizations that adopt AI without clarity on authorship, responsibility, and accuracy risk trading scale for trust. Those who integrate AI with transparent processes, editorial training, and ethical oversight have a real chance to strengthen their content—both in reach and integrity.

In 2025, it’s not the presence of AI in newsrooms that matters most. It’s how it is used, where it is supervised, and what standards it’s bound to. The future of media may be algorithmically accelerated, but the values that hold it together are still human.

Separately, AI continues to show potential in areas beyond newsrooms, including helping professionals and individuals build workflows, simplify tasks, and improve productivity.


How AI Turns Startups Into Hypergrowth Giants In Months



In Brief

In 2025, AI is transforming how startups grow, allowing small teams to rapidly develop, test, and scale products while introducing new responsibilities around data security, ethics, and responsible use.


In 2025, going from a small startup to a serious competitor takes months, not years. A few years ago, this kind of growth required large teams and long planning cycles. Now, artificial intelligence changes the pace. What used to be experimental is now a standard part of how modern startups operate.

Today, founders can test ideas, build products, find their audience, and shift strategy faster than ever. They don’t need huge teams to do it. Small groups work closely with AI tools that help across every step — from early concepts to launch and ongoing updates.

This shift is not just about speed. It’s also about changing how work gets done. AI tools now play a central role in product development, design, marketing, and customer support. But with this new power come new responsibilities. Anyone building a business with AI should understand not only what’s possible — but also where things can go wrong. Security risks, data leaks, and compliance challenges are becoming part of everyday startup decisions. For a closer look at these risks, see this overview of 10 critical AI security concerns at work.

Understanding how AI tools shift the rules of work is now essential — not just for founders, but for anyone shaping how modern companies grow.

Why AI Changes the Rules of Startup Growth

In the past, scaling a startup meant hiring large teams, raising big funding rounds, and spending months on market validation. That process was slow and risky.

Today, AI has flipped the script. Startups now use algorithms to automate tasks—operations, customer support, data entry, even parts of product development. This lets them test more ideas, reach more customers, and run global campaigns with far fewer resources.

Machine learning tools analyze customer data in real time and suggest next steps. Language models draft outreach messages and support materials. Predictive analytics highlight markets with the highest growth potential. What was once slow and sequential now happens in fast cycles of testing and editing.

As Sam Altman, CEO of OpenAI, wrote in a blog post:

“The cost to use a given level of AI falls about 10× every 12 months, and lower prices lead to much more use.”

AI has changed not only the tools startups use, but also the tempo and logic of how modern companies grow.

Building Products Faster: AI in Design and Development

Product development used to be one of the slowest parts of startup growth. In 2025, that has changed. AI tools now help design interfaces, write code, test features, and shape product plans based on user feedback.

Some teams can now launch a minimum viable product (MVP) in just a few days. Tools like Figma AI generate layouts from plain text and adjust designs using past behavior data. GitHub Copilot suggests code snippets and completes functions directly in the developer’s workspace.

For fast user testing, founders use Maze. It builds interactive prototypes, finds testers, and returns feedback in just a few hours. Framer AI creates full website layouts straight from prompts. Uizard turns sketches or descriptions into clickable app prototypes, which makes it a common pick for early-stage teams.

At later stages, tools like Notion AI and ClickUp AI support planning. They help draft product specs, organize timelines, and summarize discussions—all in one shared workspace.

ChatGPT works across the process. Teams use it to debug code, write help docs, develop campaign ideas, or prepare investor emails. It acts as a flexible assistant across both technical and creative tasks.

These tools let startups adjust their products faster and stay closer to real user needs. But AI still has limits. Algorithms don’t fully grasp context, tone, or human intent. That’s where careful review and real conversations still matter.

Data-Driven Marketing and Sales With AI

AI has changed how startups approach marketing. What used to rely on gut feeling is now built on data and automation. Founders use AI tools to test ideas faster, reach the right people, and improve results without growing the team.

According to a recent report, 88% of marketers use AI daily, and 84.9% say it speeds up delivering quality content.

Key shifts include:

Ad personalization – algorithms adjust ads based on user behavior and preferences. Startups test different messages and formats to see what drives action;

Smarter targeting – predictive models scan large sets of customer profiles to find those most likely to buy soon. Sales teams can focus on warm leads instead of chasing the wrong ones;

AI chatbots – virtual assistants now answer routine customer questions. This frees human teams to handle more complex or sensitive conversations;

Real-time insights – tools track how people interact with websites and apps. Founders see what works instantly and adjust content, design, or offers without delay;

Sales automation – platforms like HubSpot AI, Salesforce Einstein, and Drift help score leads, personalize emails, and map customer behavior through the entire journey.

Instead of relying on guesswork, startups now run faster campaigns, test more often, and adjust based on live feedback.
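The “smarter targeting” idea above — scoring prospects so sales focuses on warm leads — can be illustrated with a toy scoring function. The signal names and weights below are invented for this sketch; real systems would learn weights from historical conversion data rather than hard-coding them.

```python
# Illustrative lead-scoring sketch: rank prospects by simple engagement
# signals so a sales team can focus on the warmest leads first.
# Field names and weights are hypothetical, chosen for illustration.
def score_lead(lead: dict) -> float:
    weights = {
        "visited_pricing": 3.0,  # strong intent signal
        "opened_emails": 0.5,    # weak signal, counted per email
        "trial_signup": 5.0,     # strongest signal
    }
    return sum(weights[k] * lead.get(k, 0) for k in weights)

leads = [
    {"name": "A", "visited_pricing": 1, "opened_emails": 4, "trial_signup": 0},
    {"name": "B", "visited_pricing": 0, "opened_emails": 1, "trial_signup": 1},
]
# Highest score first: the list a sales rep would actually work through.
ranked = sorted(leads, key=score_lead, reverse=True)
```

Here lead B outranks lead A because a trial signup outweighs several email opens — the same prioritization logic that platforms like the ones named above automate at scale.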

Statistics: AI Adoption Among Startups in 2025

Recent data shows how AI is reshaping early-stage companies:

71% of companies report using generative AI in at least one business function as of late 2024—up from 33% in 2023.

46,000+ AI-related startups were active globally in 2024, a major leap from just a few years ago.

By early 2025, 305 new AI startups had already launched.

6.6% of U.S. companies now use AI to deliver products or services, up from 3.7% in late 2023.

Nearly 45% of businesses apply AI to three or more operations, showing a shift toward deeper integration.

These figures reflect how fast AI is becoming a core part of startup infrastructure. The trend is expected to accelerate as tools grow more accessible and practical.

Risks of Rapid AI Scaling

While AI helps startups grow quickly, it also brings new risks:

Hidden bias – algorithms learn from past data. That data may include mistakes or unfair patterns. Relying on AI without review can lead to products that fail real customer needs or reinforce harmful stereotypes.

Overreliance on automation – too much trust in automated decisions can leave teams without human insight. That is especially dangerous in health, finance, or social services.

Ethical pitfalls – companies scaling fast may skip checking data sources or fail to train models properly. That can expose user data, spread misleading information, or create unequal outcomes.

Smart founders bake oversight into workflows. They validate AI outputs, test with real users, and involve human experts in critical decisions.

As Satya Nadella, CEO of Microsoft, said on X last week:

“The real benchmark for AI progress is whether it makes a real difference in people’s lives — in healthcare, education, and productivity.”

That quote underlines what matters most: it’s not just scaling fast, it’s scaling responsibly—making tools that serve people well.

Case Studies: Startups That Achieved Hypergrowth With AI

Here are three notable examples from 2024–2025 showing how AI has powered rapid scaling. 

Mandolin (HealthTech, USA)

A U.S.-based healthtech startup focused on AI-driven insurance verification raised $40 million in early 2025. Its automated agents reduced specialty medication verification time from an average of 30 days to just 3 days, significantly improving patient access and clinic throughput. In less than a year since launching, Mandolin expanded to serve over 700 clinics, built with a lean team of 25 employees, showing how operational AI can deliver major impact rapidly.

Airial (TravelTech, USA / India)

Airial converts short-form content—like TikToks and Instagram Reels—into personalized travel itineraries using advanced AI. The startup recently raised $3 million in seed funding led by Montage Ventures. Within two years and a team of just nine engineers, Airial developed a platform that parses user-generated content to generate trip suggestions, with plans to launch a robust mobile app in Q3 2025.

StackBlitz (Dev Tools, USA)

Originally focused on browser-based development, StackBlitz launched Bolt, an AI-powered coding platform built on Anthropic’s Sonnet model. Bolt allows non-technical users to create full applications via simple prompts. It reached $4 million in annual recurring revenue within 30 days of launch and scaled to $40 million ARR by March 2025—all from a single AI product with high demand.

The New Skills Founders Need in an AI-Driven Landscape

AI is changing how startups work. People remain essential. But now, the most valuable skills look different. Founders who want to act quickly and make smarter choices must learn prompt writing. They also need to verify AI output and decide when to trust their own judgement.

Startups also need team members who understand how AI works under the surface. That includes protecting private data, spotting unfair or biased results, and ensuring AI is used clearly and safely. Roles like AI prompt engineers, AI advisors, and trust & safety managers are appearing in more growing companies. These positions focus on using AI responsibly.

To stay competitive, teams should build these core skills:

Prompt writing — crafting precise requests that lead to helpful output;

Understanding AI tools — knowing what AI does well and where it can err;

Responsible use — catching bias or errors before content goes live;

AI strategy thinking — using AI to plan, compare options, and test faster;

Tool connection — integrating AI into work routines like planning or support;

Clear communication — explaining AI decisions so everyone understands the logic.
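The first skill on the list, prompt writing, is less about clever phrasing than about structure: pinning down role, task, constraints, and output format instead of firing off a vague one-liner. A minimal sketch of that discipline, not tied to any specific model API:

```python
# Minimal sketch of structured prompt writing: a template that makes
# role, task, constraints, and output format explicit.
# Entirely hypothetical; field contents are example values.
PROMPT_TEMPLATE = (
    "You are {role}.\n"
    "Task: {task}\n"
    "Constraints: {constraints}\n"
    "Output format: {output_format}\n"
)

def build_prompt(role: str, task: str, constraints: str, output_format: str) -> str:
    return PROMPT_TEMPLATE.format(
        role=role, task=task,
        constraints=constraints, output_format=output_format,
    )

p = build_prompt(
    role="a support assistant for a small SaaS startup",
    task="draft a reply to a refund request",
    constraints="polite tone, under 120 words, no legal promises",
    output_format="plain text email body",
)
```

A template like this makes AI output easier to verify — the second skill on the list — because every request states up front what a correct answer must look like.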

The ability to “talk” to AI is no longer a niche skill: it now shapes product, support, and strategy.

With these abilities, small teams can act faster. Early-stage startups gain an edge when they train, hire, or practice these skills. They build stronger companies that use AI well and earn users’ trust.

Future Outlook: Where AI-Driven Startups Are Headed

As AI tools improve, the gap between small teams and large companies will shrink even more. Experts expect more founders to rely on hybrid workflows that mix AI-generated drafts, predictive models, and human review at key steps.

Some governments are already discussing rules on transparency and fairness in AI-driven decisions. These talks may lead to new laws on disclosure, data use, and algorithm checks by 2026.

Startups that act early—embedding ethical checks and strong data security—will likely face fewer conflicts and earn customer trust quickly.

Marc Andreessen, co-founder of a16z, has spoken repeatedly about this shift. He points out that AI lowers the cost of starting and scaling new ventures, making this era unusually open to ambitious founders.


Ondo Finance Acquires Oasis Pro, Alternative Trading System, And Transfer Agent

In Brief

Ondo Finance has acquired Oasis Pro, including its SEC-registered broker-dealer, ATS, and Transfer Agent, to build a regulated tokenized securities ecosystem for US investors.

Decentralized finance (DeFi) platform Ondo Finance announced its acquisition of Oasis Pro, including the firm’s SEC-registered broker-dealer, Alternative Trading System (ATS), and Transfer Agent (TA).

This acquisition is intended to support the development of a regulated ecosystem for tokenized securities, focused on delivering blockchain-based financial instruments to US investors. It represents an expansion of Ondo Finance’s tokenization capabilities by integrating regulatory infrastructure and licenses recognized under US securities law.

Oasis Pro, established in 2019, operates through subsidiaries that are registered with the SEC and are members of FINRA, offering infrastructure to facilitate the compliant issuance and trading of tokenized securities within the United States. The company has received support from investors including Mirae Asset Ventures.

Oasis Pro is among the first US-regulated ATS platforms authorized to settle digital securities transactions in both fiat currency and stablecoins such as USDC and DAI. As a FINRA member since 2020 and an SEC-registered broker-dealer, it has contributed to the evolution of digital asset regulation, including participation in FINRA’s Crypto Working Group aimed at shaping policy around tokenized asset markets.

$18T Opportunity: Ondo Finance Targets Global Growth In Tokenized Equities

Tokenized equities refer to digital representations of publicly listed shares, backed on a one-to-one basis and processed through blockchain infrastructure. This sector is projected by some analysts to surpass $18 trillion in value by the year 2033. Ondo Finance intends to introduce tokenized stock offerings to non-US investors in the near future via its Global Markets platform, in collaboration with various wallet providers, trading platforms, and blockchain protocols.

Currently managing more than $1.4 billion in tokenized assets, Ondo Finance supports a broader global network that includes custodial services, public blockchain infrastructure, and onchain liquidity solutions for real-world asset (RWA) tokenization.

Ondo operates as a blockchain-focused technology company with the stated objective of supporting the shift toward a more open financial system by developing platforms, financial instruments, and infrastructure that facilitate the integration of traditional markets with blockchain networks.

Recently, Ondo Finance introduced the Global Markets Alliance, an industry-wide initiative aimed at fostering common standards and enhancing interoperability in the area of tokenized securities. This newly formed alliance brings together a range of participants from the blockchain ecosystem, including wallets, trading platforms, and custodial service providers such as the Solana Foundation, Bitget Wallet, Jupiter, Trust Wallet, Rainbow Wallet, BitGo, Fireblocks, 1inch, and Alpaca.


Source link

Botanix Goes Live With Chainlink Interoperability And Data Stack

In Brief

Botanix integrated Chainlink’s interoperability and data infrastructure at mainnet launch to enable secure, scalable BTCFi through tools like CCIP, Data Feeds, and Data Streams.

The Bitcoin-based blockchain Botanix integrated Chainlink’s standards for onchain finance at the time of its mainnet launch.

Chainlink, regarded as a foundational component of the blockchain ecosystem, is commonly used to connect blockchain networks with external data sources, other blockchain systems, government infrastructure, and enterprise platforms. The technology has facilitated a substantial volume of transaction value across the blockchain sector, contributing to applications in decentralized finance (DeFi), banking, tokenized real-world assets, cross-chain activity, and other related areas. 

Chainlink’s suite of tools—including its Cross-Chain Interoperability Protocol (CCIP), Data Streams, and Data Feeds—has now been designated as the official cross-chain and data infrastructure for the Botanix network. These capabilities enable developers to create secure applications that aim to broaden Bitcoin’s use in decentralized finance (DeFi) while supporting the broader reach of Chainlink Scale participants in the multichain landscape. Data Streams and Data Feeds additionally offer reliable onchain data, supporting the development of the next generation of Bitcoin-based financial applications. 

As the broader multichain environment continues to grow, dependable and secure interoperability frameworks are becoming increasingly important for creating effective, production-ready decentralized applications. Through the use of Chainlink’s CCIP, developers on Botanix gain access to infrastructure that has been rigorously tested in real-world conditions and is designed to support secure communication and asset transfer across different blockchain networks. CCIP’s reliability is based on Chainlink’s established history of safeguarding high-value decentralized systems, aiming to ensure confidence in cross-chain operations.

CCIP: Bringing Security And Safe Token Transfers To Botanix

CCIP offers several features designed to support secure and efficient cross-chain interactions. Its security framework is grounded in the Chainlink Decentralized Oracle Network (DON), infrastructure that has previously secured over $75 billion in total value locked (TVL) within decentralized finance and facilitated more than $22 trillion in onchain transaction volume since early 2022. One of CCIP’s capabilities is secure token transfers via Cross-Chain Tokens (CCTs), which function independently of token-specific logic: developers can use pre-audited token pool contracts to convert any ERC20-compatible token into a CCT, or create customized token pool contracts for specific requirements. These transfers do not require CCIP-specific code in the original token contract. Additional safeguards, such as configurable rate limits and Smart Execution mechanisms, help maintain reliable execution even under network congestion.
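
The rate limits mentioned above are documented by Chainlink as token-bucket-style caps: a lane allows transfers up to a maximum capacity that refills at a fixed rate. As a rough illustration of that general mechanism only — the class below is a hypothetical Python sketch, not Chainlink code, and all names are invented — a minimal token bucket looks like this:

```python
# Minimal token-bucket sketch of a CCIP-style rate limit: a lane permits
# transfers up to `capacity`, replenished at `refill_rate` units per second.
# Hypothetical illustration only, not Chainlink code.

class TokenBucketRateLimiter:
    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity        # maximum amount transferable at once
        self.refill_rate = refill_rate  # units replenished per second
        self.tokens = capacity          # bucket starts full
        self.last_update = 0.0

    def _refill(self, now: float) -> None:
        # Top the bucket up for the time elapsed since the last check.
        elapsed = now - self.last_update
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_update = now

    def try_consume(self, amount: float, now: float) -> bool:
        """Attempt a transfer of `amount`; False if it exceeds the allowance."""
        self._refill(now)
        if amount > self.tokens:
            return False
        self.tokens -= amount
        return True

# A bucket holding 100 units, refilling 10 per second:
limiter = TokenBucketRateLimiter(capacity=100, refill_rate=10)
print(limiter.try_consume(80, now=0))  # within capacity -> True
print(limiter.try_consume(40, now=0))  # only 20 left -> False
print(limiter.try_consume(40, now=5))  # 20 + 5*10 = 70 available -> True
```

A cap of this shape bounds how much value can cross a lane in any window, which is why it is useful as a safeguard during congestion or an exploit.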

Another function, programmable token transfers, allows value to be transferred across blockchain networks alongside accompanying instructions that guide the destination smart contract on how to process incoming tokens. This feature supports the execution of multi-step, multi-party, and multi-chain operations within a single, atomic transaction. CCIP is also structured to be adaptable and sustainable for future development, with the flexibility to incorporate new blockchain networks, tokens, and advanced security measures over time.

In addition to cross-chain interoperability, consistent access to verifiable market data is crucial for developing secure decentralized applications. Botanix’s integration of Chainlink Data Feeds aims to provide developers with a reliable base for creating secure financial applications, contributing to growth within the ecosystem. As Bitcoin-based financial applications (BTCFi) on Botanix evolve to handle higher transaction volumes and increasingly automated processes, system responsiveness becomes more important. To address this, Chainlink Data Streams offers infrastructure that delivers low-latency data, targeting centralized exchange-like performance while preserving transparency and modularity within BTCFi platforms.
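
Chainlink Data Feeds expose prices as integer `answer` values scaled by a feed-specific `decimals` count, alongside an `updatedAt` timestamp, via the documented `latestRoundData` return tuple. A consuming application typically rescales the answer and rejects stale rounds. The Python sketch below illustrates that consumer-side logic; the staleness threshold and the example values are assumptions, not data from any live feed:

```python
# Consumer-side handling of a Chainlink-style price feed round.
# latestRoundData returns (roundId, answer, startedAt, updatedAt, answeredInRound);
# `answer` is an integer scaled by the feed's `decimals` (8 for many USD pairs).

def decode_price(answer: int, decimals: int) -> float:
    """Rescale a raw integer feed answer into a float price."""
    return answer / 10 ** decimals

def is_fresh(updated_at: int, now: int, max_age_seconds: int = 3600) -> bool:
    """Reject rounds older than max_age_seconds (threshold is app-specific)."""
    return now - updated_at <= max_age_seconds

# Hypothetical round from a USD-pair feed with 8 decimals:
round_id, answer, started_at, updated_at, answered_in = (
    1, 301550000000, 0, 1_720_000_000, 1
)
price = decode_price(answer, decimals=8)
print(price)                                      # 3015.5
print(is_fresh(updated_at, now=1_720_000_500))    # True: round is 500s old
```

The staleness check matters because an onchain feed that stops updating would otherwise keep serving its last answer indefinitely; lending and trading applications generally treat an old round as an error condition.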


Source link
