
How Tariffs Are Changing the Global GPU Market: A Move Towards Decentralization



Recent implementations of significant economic tariffs, particularly between the United States and China, have introduced substantial volatility into the global technology landscape. The imposition of broad US import tariffs, including steep rates on Chinese goods, and subsequent retaliatory measures by China, have created considerable market turmoil. While a subsequent 90-day pause on most retaliatory tariffs (notably excluding China) provided temporary market relief, evidenced by significant stock market gains, the underlying instability and the persistence of base tariffs underscore ongoing risks.

This report analyzes the repercussions of this tariff environment on the global graphics processing unit (GPU) industry based on available information. The analysis indicates that tariff-induced disruptions expose vulnerabilities within traditional, centralized GPU manufacturing and supply chain models. These disruptions manifest as increased production costs, potential delays, and strategic challenges for key industry players. Consequently, the Artificial Intelligence (AI) sector, heavily reliant on GPU compute power, faces rising costs, potentially hindering innovation, especially for smaller entities. Against this backdrop, the report examines the argument presented for decentralized, borderless GPU-as-a-Service platforms, exemplified by Spheron Network, as a potentially resilient and cost-effective alternative infrastructure model better suited to navigate the current climate of geopolitical and economic uncertainty.

The Evolving Tariff Landscape: A Catalyst for Uncertainty

A series of tariff actions initiated by the United States administration under President Donald Trump has significantly impacted the global economic environment. These actions target numerous industries and countries. Understanding these measures and their current status is crucial for assessing their impact on technology supply chains, particularly for GPUs.

Overview of Introduced Tariffs

The tariff actions described include several key components. Firstly, the U.S. administration implemented 10% general import tariffs targeting goods from 86 countries worldwide. Specific measures were also directed at China, resulting in tariffs that brought the total effective rate on certain goods from China to a substantial 145%. The administration framed these actions as efforts to address trade imbalances and encourage domestic manufacturing. In response to the U.S. measures, China announced its retaliatory tariffs. Initially set at 84% on targeted U.S. goods, these increased to 125%. The magnitude of these percentages underscores the severity of the trade dispute and its potential to disrupt established economic flows.
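
To make the cited rates concrete, the landed-cost arithmetic can be sketched in a few lines. This is an illustrative sketch only: the component price is hypothetical, and the cited 145% is treated as a single effective ad valorem rate.

```python
# Illustrative landed-cost arithmetic under tariffs.
# The $1,000 component price is a hypothetical example.

def landed_cost(base_price: float, effective_tariff_rate: float) -> float:
    """Apply an ad valorem tariff rate to an import's base price."""
    return base_price * (1 + effective_tariff_rate)

# A $1,000 component facing the cited 145% effective rate on Chinese goods
# would land at $2,450; under only the 10% general tariff, at $1,100.
print(landed_cost(1_000, 1.45))  # 2450.0
print(landed_cost(1_000, 0.10))  # 1100.0
```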

The Temporary Pause and Lingering Instability

President Trump announced a 90-day pause on retaliatory tariffs for most countries to de-escalate tensions. This move triggered a significant positive reaction in financial markets, reportedly initiating a $3.5 trillion inflow back into the stock market. Major indices saw substantial gains: the S&P 500 erased earlier losses to rise by 9.5%, the Dow Jones Industrial Average increased by 2,000 points (or 5%), and the Nasdaq Composite climbed 6.8%.

However, this pause carries critical caveats. Firstly, it explicitly excludes China, meaning the high tariff rates between the world’s two largest economies remain primarily in effect. Secondly, the 10% base general import tariff imposed by the U.S. on numerous countries also remains active. Therefore, while the pause on retaliatory measures offered temporary relief, it did not resolve the core trade conflicts or remove all tariff barriers. The selective nature of the relief, focusing away from China, inadvertently sharpens the focus on the unresolved trade friction involving arguably the most critical single nation within the global electronics supply chain, thereby maintaining significant underlying risk. Furthermore, the strong positive market reaction highlights the sensitivity to tariff news, yet contrasts sharply with the persistent risks posed by the remaining base tariffs and the unresolved China situation. This suggests that short-term market sentiment may not fully capture the long-term structural vulnerabilities introduced by this new era of trade friction. The instability caused by the initial imposition of tariffs, the uncertainty surrounding their potential reinstatement after the 90-day pause, and the ongoing situation with China continue to underscore the vulnerability of global markets and supply chains.

Summary of Mentioned Tariffs and Status

| Tariff Type | Rate(s) Mentioned | Target(s) | Current Status (as described) |
| --- | --- | --- | --- |
| US General Import Tariff | 10% | Goods from 86 countries | Active |
| US Tariffs on China | Totaling 145% | Goods from China | Active (excluded from pause) |
| China Retaliatory Tariffs | 84%, then 125% | Goods from the US | Active (excluded from US pause) |
| US Retaliatory Tariff Pause | N/A | Retaliatory tariffs against most countries | Active (90-day duration from announcement) |

GPU Supply Chain Under Strain: Geopolitics Hits Manufacturing

The introduction and potential continuation of tariffs raise critical questions about the resilience of the global GPU supply chain. Given the geographic concentration of manufacturing and the sector’s reliance on specific materials, it appears particularly exposed to these geopolitical pressures, especially with China remaining outside the scope of the temporary tariff pause.

Vulnerability of Key Manufacturing Hubs

The GPU supply chain relies heavily on manufacturing and component sourcing from specific regions, many of which are now directly affected by the tariff regimes. Key hubs identified as burdened include not only China, which faces ongoing high tariffs, but also Taiwan, South Korea, and Vietnam. These nations are central to the production of critical GPU components and final assembly. The imposition of tariffs, or the threat of their reinstatement, creates significant disruption risks. Expected consequences include increased production costs, as tariffs add direct expenses to imported components or materials; potential delays in shipments due to new customs procedures or supply chain adjustments; and a forced reevaluation of manufacturing strategies by leading semiconductor and technology companies seeking to mitigate these risks. The dependence on these specific geographic locations highlights a critical vulnerability in the existing supply chain structure.

Material-Specific Impacts: The Aluminum Example

The impact of tariffs extends beyond finished components to basic materials essential for manufacturing. Aluminum, for instance, has been targeted with particularly high tariffs of 25%. This is significant because aluminum is described as a fundamental material used in constructing various GPU components, likely including heat sinks, frames, and other structural elements. The direct consequence of tariffs on such a core material is an anticipated increase in the production costs for GPUs. These higher manufacturing costs are expected to be passed down the value chain, ultimately leading to higher retail prices for consumers and enterprise buyers. This ripple effect has downstream implications, potentially increasing operational costs for large-scale users of GPUs, such as cloud computing providers and AI enterprises that rely on vast arrays of these processors within their data centers.
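
The ripple effect described here can be sketched as a simple bill-of-materials calculation: tariff the aluminum line item, re-sum the costs, and apply a margin. All cost shares and the margin below are hypothetical assumptions for illustration only.

```python
# Hypothetical pass-through of a 25% aluminum tariff into a GPU's retail price.
# All component costs and the 40% margin are illustrative assumptions.

ALUMINUM_TARIFF = 0.25

def gpu_retail_price(bom: dict[str, float], aluminum_item: str, margin: float) -> float:
    """Sum component costs, applying the tariff to the aluminum line item,
    then apply a uniform margin to reach a retail price."""
    total = sum(cost * (1 + ALUMINUM_TARIFF) if item == aluminum_item else cost
                for item, cost in bom.items())
    return total * (1 + margin)

bom = {"gpu_die": 250.0, "memory": 120.0, "pcb": 40.0,
       "aluminum_cooling_and_frame": 60.0, "assembly": 30.0}

before = sum(bom.values()) * 1.40  # no tariff, 40% margin
after = gpu_retail_price(bom, "aluminum_cooling_and_frame", 0.40)
print(f"retail before tariff: ${before:.2f}")  # $700.00
print(f"retail after tariff:  ${after:.2f}")   # $721.00
```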

Semiconductor Exemptions vs. Broader Electronics Impact

While it is noted that semiconductors, the core processing units within GPUs, were initially exempted from some tariff actions, this has not insulated the GPU industry from negative effects. The broader electronics industry, which encompasses the assembly and other components that constitute a complete GPU product, has been significantly affected. Tariffs on materials like aluminum, other electronic components sourced from affected regions, and the general market uncertainty contribute to the overall impact. Although the stated rationale for the tariffs by the U.S. administration involves addressing trade imbalances and encouraging domestic production, the immediate repercussions observed include significant turmoil in global technology markets and sharp declines in the stock prices of major GPU and AI-related companies. This demonstrates that disrupting the supply chain, even if core chips are initially spared, can have far-reaching consequences. The interconnected nature of the supply chain means that vulnerabilities exist at multiple points – targeting essential materials or key manufacturing hubs can create bottlenecks and cost increases just as effectively as targeting the semiconductor itself.

Semiconductor and GPU Company Responses: Navigating the Turbulence

Major corporations within the semiconductor and GPU ecosystem are directly confronting the challenges posed by the tariff environment. Their responses involve navigating increased costs, managing investor concerns reflected in stock price volatility, and undertaking significant strategic shifts in manufacturing footprints.

Impacts on Industry Giants (TSMC, Samsung)

Leading chip manufacturers, such as Taiwan Semiconductor Manufacturing Company (TSMC) and Samsung Electronics, are reportedly grappling with the compounded effects of these tariffs. These effects impact both their operational efficiency and overall profitability. The sensitivity of the market to these geopolitical actions was starkly illustrated when TSMC’s shares reportedly dropped 15% as of April 8, following the unveiling on April 2 of a potential 32% U.S. tariff on imports from Taiwan. This rapid and significant decline underscores how tightly financial markets are linking tariff announcements directly to the perceived value and future earnings potential of critical players in the semiconductor supply chain. In response to these pressures, companies like TSMC have announced significant investments aimed at diversifying their manufacturing base, including major new production facilities in the U.S. However, these strategic moves are not without their own tariff-related complications. Operational costs associated with relocating production facilities, potentially including the import of specialized manufacturing equipment or materials needed for construction and setup, are themselves subject to tariffs, which can dramatically increase the projected expenses of such initiatives. This suggests that geographic relocation, while a logical long-term strategy, is complex, costly, and not an immediate or complete solution to escaping tariff pressures.

NVIDIA’s Strategic Maneuvering

NVIDIA, identified as the global leader in the GPU market, is also taking steps to adapt. The company has announced plans to shift some of its manufacturing operations to the United States. Specifically, NVIDIA revealed it was finalizing plans to produce its advanced Blackwell AI GPU chip at TSMC’s new plant in Arizona, with production anticipated to begin in 2025. This move is explicitly framed as an attempt by NVIDIA to mitigate the potential negative impacts of the ongoing tariff situation on its business operations and supply chain resilience. This represents a long-term strategic adjustment aimed at de-risking its manufacturing dependence on regions currently embroiled in trade disputes. However, the 2025 timeline highlights the difference between long-term strategic planning and the immediate financial and operational headwinds created by the current tariff environment. While such moves position the company for greater future resilience, they do not alleviate the near-term cost pressures and market volatility impacting the industry today.

Implications for the Artificial Intelligence Sector: Rising Compute Costs

With its immense appetite for computational power, the AI industry is particularly sensitive to GPU market disruptions. Tariffs’ effects ripple through AI development costs, data center operations, and the accessibility of cutting-edge technology, potentially creating divergent outcomes for different players within the sector.

The Direct Cost Impact: Expensive GPUs

The most immediate consequence of tariffs impacting the GPU supply chain is an anticipated rise in GPU prices. Higher production costs, stemming from tariffs on materials like aluminum and components sourced from affected regions, are often passed through the value chain to the end consumer. This leads to more expensive GPUs in the retail and enterprise markets. Such price surges have the potential to dampen demand, particularly within sectors that rely heavily on GPU acceleration, including AI research and deployment, high-performance computing, gaming, and data centers. Analysts have expressed concern that these increased costs could make AI development significantly more expensive. This, in turn, carries the risk of hindering the pace of innovation and potentially slowing growth in a field reliant on accessible, powerful computing resources. However, demand elasticity remains uncertain: it is unclear how sharply higher prices will actually curb purchases, given how strategically important GPU compute has become.

Cloud Providers and Data Center Expenses

A significant challenge arises from the impact of tariffs on the operational costs of cloud computing providers, which serve as the primary source of GPU infrastructure for many AI companies. Rising costs for GPUs themselves, coupled with potential increases in data center construction and maintenance expenses (potentially linked to tariffs on materials like aluminum used in building infrastructure and cooling systems), contribute to higher overall operating expenditures for these providers. AI enterprises are identified as prominent clients of large, centralized cloud data centers housing thousands of high-performance GPUs. To maintain profitability amidst rising input costs, these cloud providers are expected to increase their service prices. This makes their already expensive GPU compute instances even less affordable, directly impacting the budgets of AI companies relying on their services. The structure of the cloud market thus acts as a direct transmission mechanism, channeling tariff-related cost increases from the hardware supply chain directly to AI end-users.
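
The transmission mechanism can be made explicit with a rough pricing model: if a provider sets instance rates from amortized hardware cost plus operating expense and a fixed margin, any tariff-driven hardware increase flows straight into the hourly rate. All figures below are hypothetical.

```python
# Hypothetical model of how tariff-driven hardware costs flow into
# cloud GPU instance pricing. All numbers are illustrative.

HOURS_PER_YEAR = 24 * 365

def hourly_rate(gpu_price: float, amort_years: float,
                opex_per_hour: float, margin: float) -> float:
    """Amortized hardware cost per hour plus opex, marked up by a fixed margin."""
    amortized = gpu_price / (amort_years * HOURS_PER_YEAR)
    return (amortized + opex_per_hour) * (1 + margin)

base = hourly_rate(gpu_price=25_000, amort_years=3, opex_per_hour=0.60, margin=0.5)
tariffed = hourly_rate(gpu_price=25_000 * 1.25, amort_years=3,
                       opex_per_hour=0.60, margin=0.5)
print(f"${base:.2f}/hr -> ${tariffed:.2f}/hr")  # $2.33/hr -> $2.68/hr
```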

Differential Impact: Large Enterprises vs. Startups

The burden of rising GPU compute costs is unlikely to be distributed evenly across the AI landscape. Large, well-funded AI organizations, such as OpenAI, are often better positioned to secure the necessary GPU resources, even at inflated prices, due to their scale, existing relationships with hardware vendors, and financial capacity. In contrast, smaller companies and AI startups, particularly those operating in emerging areas like the Web3 sector, may encounter significant obstacles in accessing the top-quality GPU chips required for advanced AI workloads like Large Language Model (LLM) training, generative AI development, and AI agent training. This disparity threatens to exacerbate existing inequalities within the AI field, potentially concentrating cutting-edge development capabilities within larger organizations and stifling innovation from smaller, more agile players who may be priced out of accessing essential compute resources. Faced with these challenges, AI enterprises are reportedly exploring alternative ways to secure GPU resources.

Decentralized GPU-as-a-Service: The Spheron Network Model

Amidst the challenges posed by tariffs to traditional GPU supply chains and centralized cloud infrastructure, the concept of decentralized GPU compute networks is presented as a viable alternative. Spheron Network is highlighted as a specific example of this approach, leveraging a Decentralized Physical Infrastructure Network (DePIN) model.

Introducing the Concept: DePIN and Decentralization

Spheron Network offers a decentralized cloud computing infrastructure designed to provide a “tariff-proof” service for enterprises requiring premium GPU computing, particularly in the AI and gaming sectors. The foundation of this offering is its DePIN stack. The core principle involves creating a globally distributed network of GPU resources, rather than concentrating them in large, geographically fixed data centers. This inherently borderless structure is positioned as a key advantage in circumventing localized geopolitical tensions and economic barriers like tariffs. The fundamental premise is that extreme geographic distribution mitigates the risk associated with any single country or region facing trade restrictions or instability; issues in one location are less likely to cripple the entire network’s availability or cost structure.
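
The premise that geographic distribution mitigates single-region risk can be illustrated with an expected-capacity calculation. The region weights and disruption probabilities below are hypothetical, chosen only to show the shape of the argument.

```python
# Illustrative expected-capacity comparison: one region vs. many.
# Capacity shares and disruption probabilities are hypothetical.

def expected_capacity(regions: list[tuple[float, float]]) -> float:
    """Each region is (capacity_share, probability_of_disruption).
    Returns the expected fraction of total capacity that stays online."""
    return sum(share * (1 - p_down) for share, p_down in regions)

centralized = [(1.0, 0.20)]                 # all GPUs in one country
distributed = [(0.25, 0.20), (0.25, 0.05),  # spread across four regions,
               (0.25, 0.05), (0.25, 0.05)]  # only one of them high-risk

print(expected_capacity(centralized))  # 0.80
print(expected_capacity(distributed))  # 0.9125
```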

Spheron’s Infrastructure Components

The Spheron Network’s scale underpins its claimed capacity to serve enterprise needs. It reportedly comprises over 10,400 high-performance GPUs distributed globally, a pool that includes access to sought-after high-end chips and 35.2K MAC chips. Additionally, the network incorporates over 768K CPUs, helping to ensure reliability and consistent service quality across this distributed infrastructure.

Operational Model: Resource Pooling and Host Incentives

The operational model of Spheron Network relies on aggregating compute resources from a wide array of providers. It employs a system where anyone meeting the requirements can become a “Cloud Host” by contributing their high-performance GPU compute capacity to the network. In return for providing these services, hosts are rewarded with FN Points. This incentive structure is crucial for attracting and retaining a diverse, global pool of GPU providers, thereby enabling the network’s scale and distributed nature. Spheron then utilizes decentralized resource pooling mechanisms to efficiently channel this aggregated computing power directly from these various sources to clients. This model is claimed to maximize the utilization of connected GPUs and enhance overall cost-efficiency compared to traditional centralized approaches.
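
The loop described above, hosts register capacity, workloads are matched to it, and hosts accrue points, can be sketched in a few lines. This is a conceptual illustration only: Spheron's actual scheduler, APIs, and FN Points accounting are not described in the article, so every name and formula below is hypothetical.

```python
# Conceptual sketch of decentralized GPU resource pooling with host rewards.
# Names (CloudHost, Pool.allocate) and the 1-point-per-GPU-hour rule are
# hypothetical illustrations, not Spheron's actual implementation.

from dataclasses import dataclass, field

@dataclass
class CloudHost:
    host_id: str
    gpu_model: str
    free_gpus: int
    fn_points: float = 0.0

@dataclass
class Pool:
    hosts: list[CloudHost] = field(default_factory=list)

    def allocate(self, gpu_model: str, count: int, hours: float) -> list[str]:
        """Greedily fill a request from hosts with matching free GPUs,
        crediting each host points proportional to GPU-hours served."""
        placed, remaining = [], count
        for host in self.hosts:
            if remaining == 0:
                break
            if host.gpu_model == gpu_model and host.free_gpus > 0:
                used = min(host.free_gpus, remaining)
                host.free_gpus -= used
                host.fn_points += used * hours  # assumed: 1 point per GPU-hour
                remaining -= used
                placed.append(f"{host.host_id}x{used}")
        if remaining:
            raise RuntimeError("insufficient capacity in pool")
        return placed

pool = Pool([CloudHost("host-a", "H100", 4), CloudHost("host-b", "H100", 8)])
print(pool.allocate("H100", 6, hours=2.0))             # ['host-ax4', 'host-bx2']
print([(h.host_id, h.fn_points) for h in pool.hosts])  # [('host-a', 8.0), ('host-b', 4.0)]
```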

Spheron Network: Offerings and Claimed Advantages

Building on its decentralized infrastructure model, Spheron Network promotes specific offerings and advantages to attract users, particularly those impacted by the rising costs and uncertainties in the traditional cloud market.

Specific High-End GPU Offerings and Pricing

A key part of Spheron’s value proposition involves offering access to cutting-edge AI chips at highly competitive prices.

By highlighting low hourly rates for these specific, high-demand chips (essential for advanced AI workloads), Spheron directly contrasts its pricing with the anticipated cost increases from incumbent providers affected by tariffs. Aggressive pricing on the latest hardware is a strategic tool to capture market share from customers feeling the financial pressure of the current geopolitical environment.

Core Claimed Advantages

Beyond specific pricing, Spheron emphasizes several core advantages stemming from its decentralized architecture, positioning itself as particularly well-suited to the current unstable global trade environment:

Tariff-Proof Service: The global, distributed nature of the network is claimed to insulate it from country-specific tariffs and trade disputes.

Cost-Efficiency: Achieved through decentralized resource pooling, potentially higher GPU utilization rates compared to centralized models, and lower overhead associated with managing massive data centers. This enables “unbeatable pricing.”

Resilience: The distributed infrastructure is presented as less vulnerable to single points of failure, whether technical, economic, or geopolitical.

Scalability: The model allows for aggregating resources globally, suggesting inherent scalability supported by its large claimed network size.

Predictable Pricing: Offered as a contrast to the potential for sudden price hikes from centralized providers who may need to pass on tariff-related costs or react to supply chain disruptions.

These claimed benefits collectively form the argument that Spheron provides a more stable, affordable, and reliable source of GPU compute in an increasingly unpredictable world.

Centralized vs. Decentralized Cloud Architectures: A Comparative View

Centralized Model Challenges

Centralized cloud providers are facing inherent structural challenges exacerbated by geopolitical instability. These providers typically concentrate vast GPU resources within large, capital-intensive data centers in specific geographic regions, making them susceptible to local operating costs, regulations, and tariffs. A key criticism is inefficiency: these providers are said to suffer from low GPU utilization rates, cited as sub-30%. This implies that a significant portion of expensive GPU hardware sits idle, contributing to higher operational costs. To maintain profitability under these conditions, centralized providers reportedly charge “hefty service fees” and may engage in over-provisioning (maintaining excess capacity) to guarantee resource availability, further adding to costs. This cost structure, it is argued, makes high-performance GPU compute increasingly unaffordable, especially for smaller AI companies and startups. Furthermore, their centralized nature and exposure to supply chain fluctuations make them vulnerable to sudden price shifts driven by external factors like tariffs.

Decentralized Model Advantages (as presented by Spheron)

In contrast, Spheron Network’s decentralized model is presented as an “affordable, democratized” alternative. By pooling resources from numerous distributed Cloud Hosts incentivized by point rewards, the model aims to maximize GPU utilization, channeling compute power directly to where it is needed. This focus on high utilization is a fundamental driver of cost efficiency, allowing Spheron to offer significantly lower prices. The claim of higher utilization directly addresses the purported inefficiency of the centralized model, suggesting less waste and a better return on hardware investment, translating to savings for the end-user. Global distribution provides inherent resilience against localized disruptions, including geopolitical and economic volatility such as tariffs. This resilience also contributes to better pricing stability and predictability for customers.
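
The utilization argument reduces to simple division: the effective cost of a delivered GPU-hour is the all-in cost of owning the hour divided by the fraction of hours actually sold. The $2.00/hr ownership cost below is a hypothetical figure; the utilization rates mirror the claims in the text.

```python
# Effective cost per *utilized* GPU-hour at different utilization rates.
# The $2.00/hr all-in ownership cost is a hypothetical figure.

def cost_per_utilized_hour(all_in_cost_per_hour: float, utilization: float) -> float:
    """Idle hours still cost money; dividing by utilization spreads
    that cost over the hours that actually earn revenue."""
    return all_in_cost_per_hour / utilization

print(cost_per_utilized_hour(2.00, 0.30))  # centralized, sub-30% claim -> ~$6.67
print(cost_per_utilized_hour(2.00, 0.80))  # pooled, high-utilization claim -> $2.50
```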

Comparison of Cloud Models

| Feature | Centralized Model | Decentralized Model |
| --- | --- | --- |
| Infrastructure | Concentrated in large, expensive data centers | Globally distributed network of individual providers (DePIN) |
| GPU Utilization | Claimed low (sub-30%), leading to idle resources | Claimed high, maximized via resource pooling |
| Cost Structure | High operational costs, hefty fees, potential over-provisioning | Lower overhead, cost-efficient due to high utilization |
| Pricing | Expensive, potentially volatile due to external factors (tariffs) | Affordable (“unbeatable”), claimed predictable pricing |
| Geopolitical Resilience | Vulnerable to localized tariffs, regulations, disruptions | Resilient due to borderless, distributed nature (“tariff-proof”) |
| Accessibility | Can be unaffordable for smaller entities | More accessible, “democratized” approach |

Geopolitical Instability as a Catalyst for Decentralization: Spheron’s Opportunity

The era of predictable global trade and stable supply chains may be over, at least for the foreseeable future. Even with temporary pauses on specific tariffs, the underlying tensions and the demonstrated willingness to use tariffs as policy tools have introduced a lasting sense of market instability and uncertainty. Businesses, this analysis suggests, can no longer rely solely on the established trends and agreements of the past. Evidence for this heightened volatility is drawn from the behavior of financial markets since the tariff escalations began in early April, with the stock prices of central AI and GPU companies reportedly exhibiting wild swings that are more characteristic of volatile cryptocurrency markets than traditional technology stocks. This suggests a fundamental shift in market perception of risk within the sector.

The Risk for AI Enterprises

This pervasive uncertainty poses a serious operational risk for AI enterprises. These organizations often depend on consistent access to reliable, high-performance GPU compute for their core development and deployment activities. The potential for centralized cloud providers to abruptly change pricing in response to tariff impacts or face supply chain disruptions that limit GPU availability represents a significant vulnerability. Such disruptions could derail projects, inflate budgets unexpectedly, and hinder competitiveness, particularly for companies without the resources to buffer against such shocks.

Spheron’s Strategic Inflection Point

This climate of heightened risk and uncertainty is framed not merely as a challenge, but as a “powerful inflection point” and a strategic “opportunity” for decentralized platforms like Spheron Network. As businesses are forced to reevaluate their infrastructure strategies for better stability and cost predictability, Spheron’s model is presented as a “compelling alternative.” Its core claimed attributes – resilience derived from decentralization, cost-efficiency enabled by higher utilization and lower overhead, inherent scalability, a borderless global reach insulating it from localized issues, and predictable pricing – directly address the pain points created by the current geopolitical environment. The narrative strategically reframes the adverse market conditions (instability, volatility, rising costs) as positive drivers, creating demand for the specific solutions offered by the decentralized model.

Future Outlook and Call to Action

Spheron Network is well-positioned to become a critical “infrastructure backbone” for AI enterprises seeking stability in a disrupted world. The potential for forging strategic partnerships, particularly with organizations directly impacted by tariffs, is highlighted as a means to accelerate adoption and solidify Spheron’s market position.

Based on the provided information, the analysis indicates that recent tariff implementations, particularly between the US and China, have significantly disrupted the global GPU industry. These disruptions manifest as heightened supply chain vulnerabilities for key manufacturing regions and materials like aluminum, leading to increased production costs for major players such as TSMC and NVIDIA, despite strategic efforts to relocate manufacturing. Consequently, the AI sector faces rising GPU compute costs, primarily transmitted through centralized cloud providers, which disproportionately affect smaller companies and startups, potentially stifling innovation.

The central argument is that this environment of sustained geopolitical and economic uncertainty exposes the inherent risks of relying solely on centralized infrastructure models. The volatility and cost unpredictability associated with traditional supply chains and data centers create a compelling case for alternatives. Decentralized GPU-as-a-Service platforms, exemplified by Spheron Network’s DePIN model, are positioned as a timely solution. By leveraging global resource distribution, incentive mechanisms for participation, and potentially higher utilization rates, these platforms claim to offer greater resilience, cost-efficiency, and pricing predictability. Therefore, the current market instability, driven by tariff actions, is framed not just as a crisis for traditional models but as a significant market opportunity, validating and potentially accelerating the adoption of decentralized compute infrastructure within the AI and broader technology sectors. Spheron Network is presented as being strategically positioned to benefit from this shift, offering a potential haven of stability and affordability in an increasingly turbulent global landscape.




Advent Technologies Holdings Receives Nasdaq Notice on Late Filing of its Form 10-K | Web3Wire



LIVERMORE, Calif., April 16, 2025 (GLOBE NEWSWIRE) — Advent Technologies Holdings, Inc. (NASDAQ: ADN) announced today that, as expected, it received a notice from Nasdaq on April 16, 2025, notifying the Company that it is not in compliance with the periodic filing requirements for continued listing set forth in Nasdaq Listing Rule 5250(c)(1) because the Company’s Annual Report on Form 10-K for the year ended December 31, 2024 (“Fiscal Year 2024 10-K”) was not filed with the Securities and Exchange Commission (the “SEC”) by the required due date of March 31, 2025.

This Notice received from Nasdaq has no immediate effect on the listing or trading of the Company’s shares. Nasdaq has provided the Company with 60 calendar days, until June 16, 2025, to submit a plan to regain compliance. If Nasdaq accepts the Company’s plan, then Nasdaq may grant the Company an exception until October 13, 2025, as set out in the notice, to regain compliance with the Nasdaq Listing Rules.

The Company expects and intends to submit to Nasdaq the compliance plan by April 30, 2025. The Company continues to work diligently to complete its Fiscal Year 2024 10-K and continues to target filing its Fiscal Year 2024 10-K with the SEC by April 30, 2025, with subsequent periodic filings made on-time, after which the Company anticipates maintaining compliance with its SEC reporting obligations.

This announcement is made in compliance with Nasdaq Listing Rule 5810(b), which requires prompt disclosure of receipt of a deficiency notification.

About Advent Technologies Holdings, Inc.

Advent Technologies Holdings, Inc. is a U.S. corporation that develops, manufactures, and assembles complete fuel cell systems and supplies customers with critical components for fuel cells in the renewable energy sector. Advent is headquartered in Livermore, CA, with offices in Athens, Patras, and Kozani, Greece. With approximately 150 patents issued, pending, and/or licensed for fuel cell technology, Advent holds the IP for next-generation HT-PEM that enables various fuels to function at high temperatures and under extreme conditions, suitable for the automotive, aviation, defense, oil and gas, marine, and power generation sectors. For more information, visit http://www.advent.energy.

Cautionary Note Regarding Forward-Looking Statements

This press release includes forward-looking statements. These forward-looking statements generally can be identified by the use of words such as “anticipate,” “expect,” “plan,” “could,” “may,” “will,” “believe,” “estimate,” “forecast,” “goal,” “project,” and other words of similar meaning. Each forward-looking statement contained in this press release is subject to risks and uncertainties that could cause actual results to differ materially from those expressed or implied by such statement. Applicable risks and uncertainties include, among others, the Company’s ability to maintain the listing of the Company’s common stock on Nasdaq; future financial performance; public securities’ potential liquidity and trading; impact from the outcome of any known and unknown litigation; ability to forecast and maintain an adequate rate of revenue growth and appropriately plan its expenses; expectations regarding future expenditures; future mix of revenue and effect on gross margins; attraction and retention of qualified directors, officers, employees and key personnel; ability to compete effectively in a competitive industry; ability to protect and enhance Advent’s corporate reputation and brand; expectations concerning its relationships and actions with technology partners and other third parties; impact from future regulatory, judicial and legislative changes to the industry; ability to locate and acquire complementary technologies or services and integrate those into the Company’s business; future arrangements with, or investments in, other entities or associations; intense competition and competitive pressure from other companies worldwide in the industries in which the Company will operate; and the risks identified under the heading “Risk Factors” in Advent’s Annual Report on Form 10-K filed with the Securities and Exchange Commission (“SEC”) on August 13, 2024, as well as the other information filed with the SEC. Investors are cautioned not to place considerable reliance on the forward-looking statements contained in this press release. You are encouraged to read Advent’s filings with the SEC, available at http://www.sec.gov, for a discussion of these and other risks and uncertainties. The forward-looking statements in this press release speak only as of the date of this document, and the Company undertakes no obligation to update or revise any of these statements. Advent’s business is subject to substantial risks and uncertainties, including those referenced above. Investors, potential investors, and others should give careful consideration to these risks and uncertainties.

Advent Technologies Holdings, Inc.
press@advent.energy





Bybit to end multiple Web3 services in strategic pivot




Crypto exchange Bybit announced that it will discontinue a wide range of its Web3 products and services by the end of May, according to an April 16 notice.

The exchange said the decision is part of a shift in its operational focus as it enters a new phase of growth and innovation.

Discontinued services

Among the services being phased out are Bybit’s Cloud Wallet and Keyless Wallet, both of which will become unavailable after May 31, 2025.

Users have been urged to transfer all assets, including tokens, NFTs, and inscription assets, from these wallets to either their Bybit Funding Account or Seed Phrase Wallet before the deadline. Failure to do so may result in delayed access and require additional identity verification.

Several Web3 trading features will also be shut down at the end of May, including Bybit’s DEX Pro platform, Swap & Bridge service, and the broader NFT Marketplace.

Marketplace users who fail to withdraw NFTs in time may permanently lose access to their assets, as the platform will be taken offline and remaining assets will no longer be retrievable.

In addition to these changes, Bybit has already ended support for several other offerings, including Inscriptions, NFT Pro, ApeX, Buy Crypto, and its Initial DEX Offering (IDO) platform as of April 8.

The company’s Web3 Points program will also be discontinued on April 28.

Streamlining its focus

Despite the scale of the shutdown, some services remain unaffected. Users will still have access to the Airdrop Arcade, staking products, and decentralized applications (DApps). Seed Phrase Wallets also remain fully operational.

Bybit framed the decision as a proactive move to streamline its Web3 offerings and concentrate on delivering a more efficient and user-centric experience. The company said that discontinuing multiple services would allow it to refocus resources and support long-term development within the onchain ecosystem.

For users of the Keyless Wallet, a private key export feature is expected to launch soon, enabling continued access to wallets through third-party platforms. Bybit reiterated that it does not store users’ private keys, and once exported, the Keyless Wallet will be permanently deleted.

The company encouraged all users to begin asset transfers immediately and ensure their wallets contain sufficient gas tokens to complete transactions.




Futureverse Acquires Candy Digital, Taps DC Comics and Netflix IP to Boost Metaverse Strategy – Decrypt




Futureverse, an AI and metaverse tech firm, announced Wednesday it has acquired Candy Digital, a digital collectibles platform, even as interest in the sector has waned in recent years.

The deal gives Futureverse access to the latter’s sprawling blue-chip brand portfolio, including Major League Baseball, Netflix, and DC Comics.

Those brands “already sit at the intersection of digital and real-world fandom,” Aaron McDonald, co-founder and CEO of Futureverse, said in a statement shared with Decrypt.

The acquisition gives Futureverse access to Candy’s portfolio of over 4 million digital collectibles and customer base of 1.5 million accounts, with the AI and metaverse firm planning to gradually integrate its tech with Candy’s high-profile intellectual property partnerships.

Futureverse claims that bringing in these brands and stories could help it “build immersive experiences” that “enhance brand loyalty.”

Since its formation in late 2022 and after its $54 million Series A almost two years ago, the company has positioned itself as a leader in developing infrastructure for what it calls the “open metaverse.”

The company was last valued in 2023 at roughly $1 billion, making it the first Māori-founded unicorn, according to Callaghan Innovation, New Zealand’s government innovation agency.

Candy’s content library will be integrated into The Root Network, Futureverse’s layer-1 blockchain designed to address intellectual property protection concerns. Futureverse believes the integration could help brands signed with Candy Digital to “safely use” and protect their IP.

‘Natural move’

The acquisition was a “natural move” for Candy Digital, its co-founder and senior vice president Matt Novogratz said in the statement. Futureverse has “patented technology” that has helped define how brands engage people in the digital world, “where most interactions take place,” he added.

But skepticism over the term “metaverse,” fueled by the general market decline of NFTs, raises concerns for projects like Futureverse.

From premier collections selling at a loss, to major tech firms shutting down their NFT projects, to marketplaces shifting their focus to AI, enthusiasm for NFTs across the crypto industry has waned over the years.

Still, Futureverse has expanded rapidly through strategic acquisitions and concurrent partnerships.

Those included IP collaborations with Warner Bros., FIFA, and Reebok, among others. Meanwhile, the company’s technology stack includes generative AI tools, blockchain infrastructure, and developer tools for creating interoperable digital experiences.

Candy Digital’s investors, including Galaxy Digital, ConsenSys Mesh, and Microsoft, will join the Futureverse ecosystem as part of the acquisition.

The exact financial terms of the deal were not disclosed. Decrypt did not immediately receive responses from Futureverse on that matter.

Disclaimer: ConsenSys is one of 22 investors in an editorially independent Decrypt.

Edited by Sebastian Sinclair





Customer Relationship Management Market Share Growing at a CAGR of 11.1% Reach USD 96.39 Billion by 2027. | Web3Wire



Allied Market Research published a new report, titled, “Customer Relationship Management Market Share Growing at a CAGR of 11.1% Reach USD 96.39 Billion by 2027.” The report offers an extensive analysis of key growth strategies, drivers, opportunities, key segments, Porter’s Five Forces analysis, and competitive landscape. This study is a helpful source of information for market players, investors, VPs, stakeholders, and new entrants to gain a thorough understanding of the industry and determine the steps to be taken to gain a competitive advantage.

The customer relationship management market’s growth is driven by factors such as an increasing focus on long-term customer engagement and the growing use of customer relationship management software in small and medium-sized enterprises globally. Moreover, the worldwide acceleration of digital transformation in enterprises due to the COVID-19 outbreak boosts market growth. The increasing adoption of the bring-your-own-device (BYOD) ecosystem, driven by the surge in smartphone use, together with the high operational efficiency and low operational cost of CRM software, is expected to create lucrative opportunities in the CRM software market during the forecast period.

Request Sample Report (Get Full Insights in PDF – 334 Pages) at: https://www.alliedmarketresearch.com/request-sample/628

The CRM software market size was valued at USD 41.93 billion in 2019, and is projected to reach USD 96.39 billion by 2027, growing at a CAGR of 11.1% from 2020 to 2027.
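
As a sanity check on the headline figures, the standard CAGR formula relates the 2019 base to the 2027 projection. This is a generic calculation for illustration, not part of the report.

```python
# CAGR sanity check for the cited figures (generic formula, illustrative only).
start, end, years = 41.93, 96.39, 8           # USD billions, 2019 -> 2027

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")            # ~11.0%, consistent with the cited 11.1%

projected = start * (1 + 0.111) ** years
print(f"2027 value at 11.1%: {projected:.2f}")  # ~97.33
```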

The customer relationship management market is segmented by component, deployment mode, organization size, application, industry vertical, and region. By component, it is bifurcated into software and service. By deployment mode, it is categorized into on-premise, cloud, and hybrid. On the basis of organization size, it is categorized into large enterprises and small & medium-sized enterprises. By industry vertical, it is classified into BFSI, healthcare, energy & utility, IT & telecommunication, retail & e-commerce, manufacturing, government & defense, and others. Region-wise, the CRM software market is analyzed across North America, Europe, Asia-Pacific, and LAMEA.

If you have any questions, Please feel free to contact our analyst at: https://www.alliedmarketresearch.com/connect-to-analyst/628

By component, the software segment accounted for the highest market share in 2019 and is set to dominate the market in the analysis period. On the other hand, the service segment is expected to have the highest CAGR of 12.6% during the 2020-2027 period.

By deployment model, the cloud segment generated the highest market share in 2019 and is predicted to continue its strong run during the forecast period. The same segment is also anticipated to have the highest CAGR of 11.8% during the analysis timeframe.

By application, the customer service segment generated the maximum revenue in 2019 and is predicted to maintain its top position during the forecast period. On the other hand, the CRM analytics segment is estimated to have the highest CAGR of 15.5% in the 2020-2027 period.

Enquiry Before Buying: https://www.alliedmarketresearch.com/purchase-enquiry/628

By region, North America held the highest market share in 2019 and is expected to top the charts in the analysis period. On the other hand, Asia-Pacific is expected to be the fastest-growing region, with a CAGR of 13.8% in the analysis period.

The report has also analyzed the major companies in the market, including MICROSOFT CORPORATION, AUREA SOFTWARE INC., SUGARCRM, INSIGHTLY, INC., ZOHO CORPORATION PVT. LTD., PEGASYSTEMS, SALESFORCE.COM, INC., SAGE GROUP, SAP SE, and ORACLE CORPORATION.

Buy Now & Get Exclusive Discount on this Report (334 Pages PDF with Insights, Charts, Tables, and Figures) at: https://www.alliedmarketresearch.com/crm-software-market/purchase-options

COVID-19 Scenario:

► The COVID-19 pandemic had a significant impact on businesses all over the world due to disruptions in production units, supply chains, labor and personnel availability, and the temporary closure of cross-country borders. As a result, businesses adopted policies allowing employees to work from home. However, companies have noticed a growing demand for customer support techniques to enable smooth communication between employees and customers. By automating these solutions, intelligent cloud-based CRM can provide consolidated and analyzed data from a variety of sources inside and outside company databases, giving decision-makers useful insights.

► Due to the above-mentioned factors, customer relationship management adoption will reach its peak in the coming decades, opening significant opportunities for both established companies and start-ups.

Access the full summary at: https://www.alliedmarketresearch.com/crm-software-market

Thanks for reading this article, you can also get an individual chapter-wise section or region-wise report versions like North America, Europe, or Asia.

If you have any special requirements, please let us know and we will offer you the report as per your requirements.

Lastly, this report provides market intelligence most comprehensively. The report structure has been kept such that it offers maximum business value. It provides critical insights into the market dynamics and will enable strategic decision-making for the existing market players as well as those willing to enter the market.

Contact:
David Correa
1209 Orange Street, Corporation Trust Center, Wilmington, New Castle, Delaware 19801 USA.
Int’l: +1-503-894-6022
Toll Free: +1-800-792-5285
UK: +44-845-528-1300
India (Pune): +91-20-66346060
Fax: +1-800-792-5285
help@alliedmarketresearch.com

About Us:

Allied Market Research (AMR) is a market research and business-consulting firm of Allied Analytics LLP, based in Portland, Oregon. AMR offers market research reports, business solutions, consulting services, and insights on markets across 11 industry verticals. Adopting extensive research methodologies, AMR is instrumental in helping its clients to make strategic business decisions and achieve sustainable growth in their market domains. We are equipped with skilled analysts and experts and have a wide experience of working with many Fortune 500 companies and small & medium enterprises.

This release was published on openPR.





The AI Compute Crunch: Why Efficient Infrastructure, Not Just More GPUs



The current artificial intelligence boom captures headlines with exponential model scaling, multi-modal reasoning, and breakthroughs involving trillion-parameter models. This rapid progress, however, hinges on a less glamorous but equally crucial factor: access to affordable computing power. Behind the algorithmic advancements, a fundamental challenge shapes AI’s future – the availability of Graphics Processing Units (GPUs), the specialized hardware essential for training and running complex AI models. The very innovation driving the AI revolution simultaneously fuels an explosive, almost insatiable demand for these compute resources.

This demand collides with a significant supply constraint. The global shortage of advanced GPUs is not merely a temporary disruption in the supply chain; it represents a deeper, structural limitation. The capacity to produce and deploy these high-performance chips struggles to keep pace with the exponential growth in AI’s computational needs. Nvidia, a leading provider, sees its most advanced GPUs backlogged for months, sometimes even years. Compute queue lengths are lengthening across cloud platforms and research institutions. This mismatch isn’t a fleeting issue; it reflects a fundamental imbalance between how compute is supplied and how AI consumes it.

The scale of this demand is staggering. Nvidia’s CEO, Jensen Huang, recently projected that AI infrastructure spending will triple by 2028, reaching $1 trillion. He also anticipates compute demand increasing 100-fold. These figures are not aspirational targets but reflections of intense, existing market pressure. They signal that the need for compute power is growing far faster than traditional supply mechanisms can handle.

As a result, developers and organizations across various industries encounter the same critical bottleneck: insufficient access to GPUs, inadequate capacity even when access is granted, and prohibitively high costs. This structural constraint ripples outwards, impacting innovation, deployment timelines, and the economic feasibility of AI projects. The problem isn’t just a lack of chips; it’s that the entire system for accessing and utilizing high-performance compute struggles under the weight of AI’s demands, suggesting that simply producing more GPUs within the existing framework may not be enough. A fundamental rethink of compute delivery and economics appears necessary.

Why Traditional Cloud Models Fall Short for Modern AI

Faced with compute scarcity, the seemingly obvious solution for many organizations building AI products is to “rent more GPUs from the cloud.” Cloud platforms offer flexibility in theory, providing access to vast resources without upfront hardware investment. However, this approach often proves inadequate for AI development and deployment demands. Users frequently grapple with unpredictable pricing, where costs can surge unexpectedly based on demand or provider policies. They may also pay for underutilized capacity, reserving expensive GPUs ‘just in case’ to guarantee availability, leading to significant waste. Furthermore, long provisioning delays, especially during periods of peak demand or when transitioning to newer hardware generations, can stall critical projects.

The underlying GPU supply crunch fundamentally alters the economics of cloud compute. High-performance GPU resources are increasingly priced based on their scarcity rather than purely on their operational cost or utility value. This scarcity premium arises directly from the structural shortage meeting major cloud providers’ relatively inflexible, centralized supply models. These providers, needing to recoup massive investments in data centers and hardware, often pass scarcity costs onto users through static or complex pricing tiers, amplifying the economic pain rather than alleviating it.

This scarcity-driven pricing creates predictable and damaging consequences across the AI ecosystem. AI startups, often operating on tight budgets, struggle to afford the extensive compute required for training sophisticated models or keeping them running reliably in production. The high cost can stifle innovation before promising ideas even reach maturity. Larger enterprises, while better able to absorb costs, frequently resort to overprovisioning – reserving far more GPU capacity than they consistently need – to ensure access during critical periods. This guarantees availability but often results in expensive hardware sitting idle. Critically, the cost per inference – the compute expense incurred each time an AI model generates a response or performs a task – becomes volatile and unpredictable. This undermines the financial viability of business models built on technologies like Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, and autonomous AI agents, where operational cost is paramount.
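
The “cost per inference” highlighted here can be written down directly: tokens processed, divided by throughput, priced at the GPU rental rate. The rates, throughput, and token counts below are hypothetical assumptions chosen only to show how a rental-price spike hits unit economics.

```python
# Hypothetical cost-per-inference arithmetic for an LLM service.
# GPU rate, throughput, and token counts are illustrative assumptions.

def cost_per_request(gpu_hourly_rate: float, tokens_per_second: float,
                     tokens_per_request: float) -> float:
    """Cost of one request = (tokens / throughput) seconds of GPU time,
    priced at the hourly rental rate."""
    seconds = tokens_per_request / tokens_per_second
    return gpu_hourly_rate * seconds / 3600

stable = cost_per_request(gpu_hourly_rate=2.50, tokens_per_second=100,
                          tokens_per_request=1_500)
spiked = cost_per_request(gpu_hourly_rate=6.00, tokens_per_second=100,
                          tokens_per_request=1_500)
print(f"${stable:.4f} vs ${spiked:.4f} per request")  # $0.0104 vs $0.0250
```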

The traditional cloud infrastructure model itself contributes to these challenges. Building and maintaining massive, centralized GPU clusters demands enormous capital expenditure. Integrating the latest GPU hardware into these large-scale operations is often slow, lagging behind market availability. Furthermore, pricing models tend to be relatively static, failing to effectively reflect real-time utilization or demand fluctuations. This centralized, high-overhead, slow-moving approach represents an inherently expensive and inflexible way to scale compute resources in a world characterized by AI’s dynamic workloads and unpredictable demand patterns. The structure optimized for general-purpose cloud computing struggles to meet the AI era’s specialized, rapidly evolving, and cost-sensitive needs.

The Pivot Point: Cost Efficiency Becomes AI’s Defining Metric

The AI industry is navigating a crucial transition, moving from what could be called the “imagination phase” into the “unit economics phase.” In the early stages of this technological shift, demonstrating raw performance and groundbreaking capabilities was the primary focus. The key question was “Can we build this?” Now, as AI adoption scales and these technologies move from research labs into real-world products and services, the economic profile of the underlying infrastructure becomes the central constraint and a critical differentiator. The focus shifts decisively to “Can we afford to run this at scale, sustainably?”

Emerging AI workloads demand more than just powerful hardware; they require compute infrastructure that is predictable in cost, elastic in supply (scaling up and down easily with demand), and closely aligned with the economic value of the products they power. Financial sustainability is no longer a secondary concern but a primary driver of infrastructure choices and, ultimately, business success. Many of the most promising and potentially transformative AI applications are also the most resource-intensive, making efficient infrastructure absolutely critical for their viability:

Autonomous Agents and Planning Systems: These AI systems do more than just answer questions; they perform actions, iterate on tasks, and reason over multiple steps to achieve goals. This requires persistent, chained inference workloads that place heavy demands on both memory and compute. The cost per interaction naturally scales with the complexity of the task, making affordable, sustained compute essential. (In simple terms, AI that actively thinks and works over time needs a constant supply of affordable power).

Long-Context and Future Reasoning Models: Models designed to process vast amounts of information simultaneously (handling context windows exceeding 100,000 tokens) or simulate complex multi-step logic for planning purposes require continuous access to top-tier GPUs. Their compute costs rise substantially with the scale of the input or the complexity of the reasoning, and these costs are often difficult to reduce through simple optimization. (Essentially, AI analyzing large documents or planning complex sequences needs lots of powerful, sustained compute).

Retrieval-Augmented Generation (RAG): RAG systems form the backbone of many enterprise-grade AI applications, including internal knowledge assistants, customer support bots, and tools for legal or healthcare analysis. These systems constantly retrieve external information, embed it into a format the AI understands, and interpret it to generate relevant responses. This means compute consumption is ongoing during every user interaction, not just during the initial model training phase. (This means AI that looks up current information to answer questions needs efficient compute for every single query).

Real-Time Applications (Robotics, AR/VR, Edge AI): Systems that must react in milliseconds, such as robots navigating physical spaces, augmented reality overlays processing sensor data, or edge AI making rapid decisions, depend on GPUs delivering consistent, low-latency performance. These applications cannot tolerate delays caused by compute queues or unpredictable cost spikes that might force throttling. (AI needing instant reactions requires reliable, fast, and affordable compute).

For each of these advanced application categories, the factor determining practical viability shifts from solely model performance to the sustainability of the infrastructure economics. Deployment becomes feasible only if the cost of running the underlying compute makes business sense. In this context, access to cost-efficient, consumption-based GPU power ceases to be merely a convenience; it becomes a fundamental structural advantage, potentially gating which AI innovations successfully reach the market.
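
For the agent workloads described above, the claim that cost scales with task complexity can be sketched directly: each reasoning step is its own inference call, and later steps carry accumulated context. The per-token cost, step counts, and context-growth rate below are hypothetical.

```python
# Hypothetical cost scaling for multi-step agent workloads: each reasoning
# step is an inference call, so cost grows with chain length and context.

COST_PER_1K_TOKENS = 0.002  # illustrative serving cost

def agent_task_cost(steps: int, tokens_per_step: int,
                    context_growth: int = 200) -> float:
    """Later steps carry accumulated context, so token counts grow per step."""
    total_tokens = sum(tokens_per_step + i * context_growth for i in range(steps))
    return total_tokens / 1000 * COST_PER_1K_TOKENS

print(f"1-step Q&A:    ${agent_task_cost(1, 800):.4f}")   # $0.0016
print(f"10-step agent: ${agent_task_cost(10, 800):.4f}")  # $0.0340
print(f"50-step agent: ${agent_task_cost(50, 800):.4f}")  # $0.5700
```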

Spheron Network: Reimagining GPU Infrastructure for Efficiency

The clear limitations of traditional compute access models highlight the market’s need for an alternative: a system that delivers compute power like a utility. Such a model must align costs directly with actual usage, unlock the vast, latent supply of GPU power globally, and offer elastic, flexible access to the latest hardware without demanding restrictive long-term commitments. GPU-as-a-Service (GaaS) platforms, specifically designed around these principles, are emerging to fill this critical gap. Spheron Network, for instance, offers a capital-efficient, workload-responsive infrastructure engineered to scale with demand, not with complexity.

Spheron Network builds its decentralized GPU cloud infrastructure around a core principle: deliver compute efficiently and dynamically. In this model, pricing, availability, and performance respond directly to real-time network demand and supply, rather than being dictated by centralized providers’ high overheads and static structures. This approach aims to fundamentally realign supply and demand to support continuous AI innovation by addressing the economic bottlenecks hindering the industry.

Spheron Network’s model rests on several key pillars designed to overcome the inefficiencies of traditional systems:

Distributed Supply Aggregation: Instead of concentrating GPUs in a handful of massive, hyperscale data centers, Spheron Network connects and aggregates underutilized GPU capacity from a diverse, global network of providers. This network can include traditional data centers, independent crypto-mining operations with spare capacity, enterprises with unused hardware, and other sources. Creating this broader, more geographically dispersed, and flexible supply pool helps to flatten price spikes during peak demand and significantly improves resource availability across different regions.

Lower Operating Overhead: The traditional cloud model requires immense capital expenditures to build, maintain, secure, and power large data centers. By leveraging a distributed network and aggregating existing capacity, Spheron Network avoids much of this capital intensity, resulting in lower structural operating overheads. These savings can then be passed through to users, enabling AI teams to run demanding workloads at a potentially lower cost per GPU hour without compromising access to high-performance hardware like Nvidia’s latest offerings.

Faster Hardware Onboarding: Integrating new, more powerful GPU generations into the Spheron Network can happen much more rapidly than in centralized systems. Distributed providers across the network can acquire and bring new capacity online quickly as hardware becomes commercially available. This significantly reduces the typical lag between a new GPU generation’s launch and developers gaining access to it. It bypasses the lengthy corporate procurement cycles and integration testing common in large cloud environments and frees users from multi-year contracts that might lock them into older hardware.

The outcome of this decentralized, efficiency-focused approach is not just the potential for lower costs. It creates an infrastructure ecosystem that inherently adapts to fluctuating demand, improves the overall utilization of valuable GPU resources across the network, and delivers on the original promise of cloud computing: truly scalable, pay-as-you-go compute power, purpose-built for the unique and demanding nature of AI workloads.

To clarify the distinctions, the following table compares the traditional cloud model with Spheron Network’s decentralized approach:

| Feature | Traditional Cloud (Hyperscalers) | Spheron Network | Implications for AI Workloads |
| --- | --- | --- | --- |
| Supply Model | Centralized (few large data centers) | Distributed (global network of providers) | Spheron potentially offers better availability & resilience. |
| Capital Structure | High CapEx (massive data center builds) | Low CapEx (aggregates existing/new capacity) | Spheron can potentially offer lower baseline costs. |
| Operating Overhead | High (facility mgmt, energy, cooling at scale) | Lower (distributed model, less centralized burden) | Cost savings are potentially passed to users via Spheron. |
| Hardware Onboarding | Slower (centralized procurement, integration cycles) | Faster (distributed providers add capacity quickly) | Spheron offers quicker access to the latest GPUs. |
| Pricing Model | Often static / reserved instances / unpredictable spot | Dynamic (reflects network supply/demand), usage-based | Spheron aims for more transparent, utility-like pricing. |
| Resource Utilization | Prone to underutilization (due to overprovisioning) | Aims for higher utilization (matching supply/demand) | Spheron potentially reduces waste and improves overall efficiency. |
| Contract Lock-in | Often requires long-term commitments | Typically no long-term lock-in | Spheron offers greater flexibility for developers. |

Efficiency: The Sustainable Path to High Performance

A long-standing assumption within AI infrastructure circles has been that achieving better performance inevitably necessitates accepting higher costs. Faster chips and larger clusters naturally command premium prices. However, the current market reality – defined by persistent compute scarcity and demand that consistently outstrips supply – fundamentally challenges this trade-off. In this environment, efficiency transforms from a desirable attribute into the only sustainable pathway to achieving high performance at scale.

Therefore, efficiency is not the opposite of performance; it becomes a prerequisite for it. Simply having access to powerful GPUs is insufficient if that access is economically unsustainable or unreliable. AI developers and the businesses they support need assurance that their compute resources will remain affordable tomorrow, even as their workloads grow or market demand fluctuates. They require genuinely elastic infrastructure, allowing them to scale resources up and down easily without penalty. They need economic predictability to build viable business models, free from the threat of sudden, crippling cost spikes. And they need robustness – reliable access to the compute they depend on, resistant to the bottlenecks of centralized systems.

This is precisely why GPU-as-a-Service models gain traction, especially those, like Spheron Network’s, explicitly designed around maximizing resource utilization and controlling costs. These platforms shift the focus from merely providing more GPUs to enabling smarter, leaner, and more accessible use of the compute resources already available within the global network. By efficiently matching supply with demand and minimizing overhead, they make sustained access to high performance economically feasible for a broader range of users and applications.

Conclusion: Infrastructure Economics Will Crown AI’s Future Leaders

Looking ahead, the ideal state for infrastructure is to function as a transparent enabler of innovation. This utility powers progress without imposing itself as a cost ceiling or a logistical barrier. While the industry is not quite there yet, it stands near a significant turning point. As more AI workloads transition from experimental phases into full-scale production deployment, the critical questions defining success are shifting. The conversation moves beyond “How powerful is your AI model?” to encompass crucial operational realities: “What does it cost to serve a single user?” and “How reliably can your service scale when user demand surges?”

The answers to these questions about economic viability and operational scalability will increasingly determine who successfully builds and deploys the next generation of impactful AI applications. Companies unable to manage their compute costs effectively risk being priced out of the market, regardless of the sophistication of their algorithms. Conversely, those who leverage efficient infrastructure gain a decisive competitive advantage.

In this evolving landscape, the platforms that offer the best infrastructure economics – skillfully combining raw performance with accessibility, cost predictability, and operational flexibility – are poised to win. Success will depend not just on possessing the latest hardware, but on providing access to that hardware through a model that makes sustained AI innovation and deployment economically feasible. Solutions like Spheron Network, built from the ground up on principles of distributed efficiency, market-driven access, and lower overhead, are positioned to provide this crucial foundation, potentially defining the infrastructure layer upon which AI’s future will be built. The platforms with the best economics, not just the best hardware, will ultimately enable the next wave of AI leaders.




AWS outage exposes crypto industry’s vulnerability to centralized infrastructure




Amazon Web Services (AWS) experienced a temporary outage on April 15 that disrupted several major crypto platforms and reignited concerns over the industry’s dependence on centralized infrastructure.

On social media platform X, Binance, the world’s leading crypto exchange by volume, revealed that it had temporarily suspended withdrawals as a precaution after facing connectivity issues.

The exchange confirmed that some transaction orders failed due to the AWS disruption.

However, less than an hour later, Binance announced that services were recovering and withdrawals had resumed, although delays might persist during the full system restoration.

Another major crypto trading platform, KuCoin, reported disruptions caused by the AWS incident. The exchange assured users their funds and data remained safe while its technical team worked on a fix.

Other platforms, including crypto wallet Rabby and analytics provider DeBank, also posted service interruption notices.

The outage sparked renewed conversations around the need for decentralized backend systems.

Santeri Aramo, co-founder of Auki Network, called the disruption proof of centralized vulnerability. He said:

“This is exactly why we build decentralized infrastructure. No single point of failure. No gatekeeper. No lock on your funds. Own your keys. Own your future.”

Why AWS suffered an outage

The AWS disruption occurred between 12:40 A.M. and 1:43 A.M. PDT, affecting 15 different services.

Amazon explained that the incident was caused by power interruptions at both its primary and backup systems. While most services were restored quickly, its relational database service remained affected at the time of the update.

During the outage, users experienced delayed responses and failed connections tied to EC2 instances in the affected zone.

AWS later assured users the issue had been resolved and that no recurring problems were expected.

AWS currently holds a dominant share of the global cloud infrastructure market.

This incident highlighted the risk of centralizing critical operations under a single service provider, precisely the kind of failure point that crypto platforms, by their core mission, aim to eliminate.


Hugging Face Doubles Down on Open-Source Innovation with French Robotics Powerhouse Pollen – Web3oclock



Why Pollen Robotics Is a Game-Changer

Hugging Face’s Broader Robotics Vision

Hugging Face: A French-Born Global Leader

Why This Matters for France

What’s Next?




Tariffs May Help Fund US Bitcoin Reserve Buildup Says White House Advisor Bo Hines – Decrypt




The Trump administration’s sweeping tariffs, which have roiled global markets over the past few weeks, may just become instrumental to funding the U.S. Strategic Bitcoin Reserve without using taxpayer money.

While the extensive tariffs threatened and implemented over the past month have escalated trade tensions and jolted crypto markets as the Trump administration pursues an “America First” trade policy, a key White House advisor thinks the revenue they generate could be used to add to the country’s Bitcoin stash.

Bo Hines, executive director of the Presidential Council of Advisers on Digital Assets, said in a White House interview with Professional Capital Management’s Anthony Pompliano that the Trump administration is exploring several “budget-neutral” methods to get more Bitcoin.

“We’re looking at many creative ways, whether it be from tariffs or something else,” Hines said.

Hines’ ideas come after President Donald Trump signed an executive order last month establishing the country’s Strategic Bitcoin Reserve. Data from Arkham tracking the U.S. stash shows it currently holds 192,012 BTC.

Following Trump’s executive order, a separate document from the Federal Register circulated, detailing a presidential directive that requires federal agencies to disclose all Bitcoin and digital asset holdings to the Treasury Secretary. That order’s deadline was last Saturday.

Hines added that there is a “180-day landmark that’s on the horizon” as the federal agencies go through recommendations for acquiring more Bitcoin. “We’ll comb through all the reports and then we’ll produce a comprehensive piece of work,” Hines said.

Aside from the “creative” strategy of using tariff revenues for buying Bitcoin, Hines cites Senator Cynthia Lummis’s Bitcoin Act of 2025, which would revalue Treasury gold certificates from their outdated valuation of approximately $43 per ounce to reflect current market prices exceeding $3,000 per ounce.

Such an adjustment could free billions in value for Bitcoin acquisition without requiring congressional appropriations.

Treasury Secretary Bessent and Commerce Secretary Lutnick join “many great actors” working through an inter-agency digital assets working group to develop acquisition strategies aligned with the administration’s goal of making the U.S. “the Bitcoin superpower of the globe,” Hines said.

“We’ll come together and flesh out some of these ideas and really get to the best solution,” he added.

Edited by Sebastian Sinclair


The Power of Information Theory in Trading: Beyond Shannon’s Entropy



Traders often find themselves relentlessly pursuing the perfect algorithm or the cutting-edge machine learning model that will give them the edge over competitors. However, as the brilliant mathematician Claude Shannon—rightfully called the “father of information theory” and arguably one of the greatest minds of the 20th century—demonstrated through his groundbreaking work, the fundamental question isn’t which sophisticated model to implement, but rather understanding the inherent predictability of the variables we’re attempting to forecast.

The Misguided Focus of Novice Quantitative Traders

When entering the world of algorithmic trading, many beginners immediately gravitate toward technical implementation questions:

“Should I use Long Short-Term Memory (LSTM) networks or reinforcement learning?”

“Is XGBoost superior to deep neural networks for market prediction?”

“Which programming language and library combination will yield the most efficient algorithm—Python with TensorFlow or PyTorch?”

While these are legitimate technical considerations that eventually need addressing, they fundamentally miss the crucial first question that should precede any model development: Is what we are trying to predict predictable in the first place?

This oversight represents a profound misunderstanding of what creates sustainable trading advantages. In today’s information-rich environment, algorithmic implementations have become largely commoditized—readily available through countless online tutorials, open-source libraries, and even AI assistants capable of generating sophisticated code in seconds. The marginal performance gain from selecting one well-implemented algorithm over another pales in comparison to the advantage gained from correctly identifying which market variables contain predictable information.

Shannon’s Entropy: The Mathematical Framework for Uncertainty

Claude Shannon’s revolutionary concept of entropy, introduced in his 1948 paper “A Mathematical Theory of Communication,” provides a precise mathematical framework for quantifying uncertainty in a system. Though originally developed for communication systems, entropy’s applications extend remarkably well to financial markets.

The Mathematics Behind Entropy

In information theory, entropy measures the average level of “surprise” or uncertainty inherent in a variable’s possible outcomes. Mathematically, Shannon entropy is defined as:

H(X) = -Σ p(x) log₂ p(x)

Where:

H(X) represents the entropy of random variable X

p(x) is the probability of a specific outcome x

The summation is taken over all possible values of X

For traders, this equation provides a quantitative measure of predictability. High entropy means high uncertainty with many possible outcomes that occur with similar probabilities—a state where prediction becomes exceedingly difficult. Low entropy indicates greater predictability, with certain outcomes being significantly more likely than others.
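To make this concrete, here is a minimal Python sketch that estimates Shannon entropy from observed outcomes. All data here is synthetic and purely illustrative, comparing a coin-like return series with a strongly trending one:

```python
import numpy as np

def shannon_entropy(outcomes):
    """Estimate H(X) = -sum p(x) * log2 p(x) from observed samples."""
    _, counts = np.unique(outcomes, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Synthetic example: signs of daily returns for two hypothetical assets
rng = np.random.default_rng(42)
coin_like = rng.choice([-1, 1], size=1000)                # ~maximum entropy
trending = rng.choice([-1, 1], size=1000, p=[0.2, 0.8])   # lower entropy

print(f"Coin-like series: H ~ {shannon_entropy(coin_like):.3f} bits")  # near 1.0
print(f"Trending series:  H ~ {shannon_entropy(trending):.3f} bits")   # near 0.72
```

For a binary outcome, a value near 1 bit signals a market close to a fair coin; meaningfully lower values hint at structure worth modeling.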

Applied to Markets

Consider two different trading scenarios:

High-Entropy Environment: Minute-by-minute price movements of a highly liquid cryptocurrency during a volatile news cycle. Each price tick could move in either direction with nearly equal probability, creating a state of maximum entropy.

Lower-Entropy Environment: Mean reversion opportunities in an overextended stock that historically returns to its 50-day moving average after deviating by more than three standard deviations. This pattern creates a lower-entropy situation where predictions become more reliable.

The quantitative trader who understands entropy will focus efforts on identifying and exploiting lower-entropy situations rather than attempting to predict essentially random movements, regardless of how sophisticated their modeling approach might be.

The Deceptive Nature of Randomness in Backtesting

One of the most sobering realities for quantitative traders is understanding how completely random strategies can produce dramatically different performance trajectories purely by chance. This phenomenon directly relates to Shannon’s work on information and randomness.

The Random Strategy Experiment

Consider three hypothetical trading strategies, each making completely random trade decisions with a 50% probability of winning or losing on each trade:

Strategy A: After 365 trading days, risking 1% of capital per trade, this strategy loses nearly 50% of its initial capital.

Strategy B: Using identical parameters, this strategy ends the year almost exactly where it started.

Strategy C: Despite following the same random process, this strategy generates an impressive 30% annual return.

This variance occurs despite all three strategies having identical underlying mechanics—purely random decisions with no edge whatsoever. The implications are profound: a profitable backtest does not necessarily indicate a sound strategy. It might simply reflect good luck in what is essentially a coin-flipping exercise.
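The experiment is easy to reproduce. The sketch below, with illustrative parameters, compounds a year of coin-flip trades at 1% risk per trade; the three final equity figures depend entirely on the random seed, which is precisely the point:

```python
import numpy as np

rng = np.random.default_rng(7)
n_days, risk = 365, 0.01  # one random trade per day, risking 1% of equity

for label in ("A", "B", "C"):
    # Fair coin: +1R win or -1R loss with equal probability, no edge at all
    outcomes = rng.choice([1, -1], size=n_days)
    equity = np.cumprod(1 + risk * outcomes)  # compounded equity curve
    print(f"Strategy {label}: final equity = {equity[-1]:.2f}x starting capital")
```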

Statistical Significance and Sample Size

This randomness problem highlights why statistical significance testing is crucial in strategy development. For a strategy with a small edge (say, 52% win rate), you might need thousands of trades before you can confidently distinguish skill from luck. Shannon’s information theory helps quantify exactly how many observations are needed based on the entropy of your system.

Practical Applications of Information Theory in Trading

How can traders apply information theory concepts to develop more robust strategies? Here are expanded practical approaches:

1. Focus on Entropy Reduction Through Feature Engineering

Rather than attempting to predict high-entropy variables directly, look for ways to transform your data to reduce entropy:

Market Regime Identification: Markets often exhibit different behavioral regimes (trending, range-bound, volatile, etc.) with varying entropy characteristics. By identifying the current regime first, you can apply specialized models appropriate to each context.

Conditional Probability Analysis: Instead of predicting price movements in isolation, condition your analysis on specific market states: “What is the probability of a positive return when the RSI is below 30 AND volume is above the 20-day average AND the sector ETF is showing relative strength?” (A minimal code sketch of this idea follows this list.)

Time-Scale Transformation: Some market phenomena that appear random at one time scale may show structure at another. For example, 5-minute returns might be nearly random (high entropy), while daily returns of the same instrument exhibit momentum or mean-reversion patterns (lower entropy).

Cross-Asset Information: Incorporating information from related assets might reduce the entropy of one asset’s price movements. For instance, movements in the VIX might provide information that reduces the entropy of S&P 500 futures predictions.
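As promised in the conditional-probability item above, here is a minimal sketch of why conditioning lowers entropy. The probabilities are invented for illustration, standing in for estimates you would derive from historical data:

```python
import numpy as np

def binary_entropy(p_up):
    """Shannon entropy in bits of an up/down outcome with P(up) = p_up."""
    p = np.array([p_up, 1.0 - p_up])
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical estimates (illustrative numbers, not real statistics)
p_unconditional = 0.51  # P(next day up), all days
p_conditional = 0.62    # P(up | RSI < 30, volume > 20-day avg, sector strong)

print(f"Unconditional entropy: {binary_entropy(p_unconditional):.3f} bits")
print(f"Conditional entropy:   {binary_entropy(p_conditional):.3f} bits")
```

The conditioned outcome carries measurably less uncertainty, which is exactly the entropy reduction the list above describes.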

2. Kelly Criterion: Information Theory’s Direct Application to Position Sizing

John Kelly Jr., while working at Bell Labs with Shannon, developed what became known as the Kelly Criterion—a mathematical framework for optimal position sizing based on your edge and confidence. This formula is directly derived from information theory principles:

Kelly Fraction = p - (1 - p) / r

Where:

p is the probability of a winning trade

r is the ratio of the average win to the average loss (the payoff ratio)

This approach ensures you maximize long-term growth while minimizing risk of ruin, providing a mathematically optimal solution to the bet-sizing problem.

Example Application: If your strategy has a 60% win rate with an average profit/loss ratio of 1:1, the Kelly Criterion suggests betting 20% of your bankroll on each trade (0.6 – (1-0.6)/1 = 0.2). However, most practitioners use a fractional Kelly approach (typically 25-50% of the full Kelly bet) to account for estimation errors.
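A direct translation of the formula, reproducing the 60% / 1:1 example above, might look like this (the half-Kelly haircut shown is one common convention, not a fixed rule):

```python
def kelly_fraction(p, r):
    """Kelly fraction f = p - (1 - p) / r.

    p: probability of a winning trade
    r: average win divided by average loss (payoff ratio)
    """
    return p - (1 - p) / r

full_kelly = kelly_fraction(p=0.60, r=1.0)  # 0.20, matching the example above
half_kelly = 0.5 * full_kelly               # fractional Kelly to absorb estimation error
print(f"Full Kelly: {full_kelly:.0%}, Half Kelly: {half_kelly:.0%}")
```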

3. Information Efficiency and Edge Decay

Shannon’s work helps us understand that markets continuously absorb and reflect information—a concept related to the Efficient Market Hypothesis. This creates a phenomenon where trading edges tend to decay over time as more participants discover and exploit them.

Measuring Edge Decay: Information theory provides tools to quantify how quickly a predictive signal loses its value. By measuring the mutual information between your signal and future returns across different time periods, you can determine the optimal holding period for your strategy.

Adaptation Mechanisms: Design systems that can detect edge decay through entropy measurements and adapt automatically, either by adjusting parameters or switching to alternative strategies when information content diminishes.
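One way to operationalize edge-decay measurement is to estimate the mutual information between a discretized signal and subsequent returns in successive time windows and watch it shrink. The sketch below uses synthetic data in which the signal’s edge exists early and has fully decayed late:

```python
import numpy as np

def mutual_information(x, y):
    """I(X; Y) in bits for two discrete series, via the joint histogram."""
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1.0)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask]))

# Synthetic signal vs. next-period return direction (0 = down, 1 = up)
rng = np.random.default_rng(0)
signal = rng.choice([0, 1], size=2000)
# Early window: returns agree with the signal 60% of the time (a real edge)
early = np.where(rng.random(1000) < 0.6, signal[:1000], 1 - signal[:1000])
# Late window: returns are independent of the signal (the edge has decayed)
late = rng.choice([0, 1], size=1000)

print(f"Early window: I = {mutual_information(signal[:1000], early):.4f} bits")
print(f"Late window:  I = {mutual_information(signal[1000:], late):.4f} bits")
```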

4. Entropy-Based Portfolio Construction

Beyond individual trading signals, information theory can guide portfolio construction:

Diversity Through Entropy Maximization: Construct portfolios by maximizing the entropy of return sources rather than traditional diversification metrics. This approach ensures you’re exposed to genuinely different return streams rather than illusory diversification.

Information-Weighted Allocation: Allocate capital not just based on expected returns, but on the information content of different strategies. Strategies operating in lower-entropy environments might deserve higher allocations despite seemingly similar backtested returns.

Beyond Shannon: Complementary Theoretical Frameworks

While Shannon’s work provides the foundation, several other theoretical frameworks complement information theory for traders:

Bayesian Inference: Updating Beliefs in Dynamic Markets

Bayesian statistics provides a rigorous framework for updating beliefs as new information arrives—perfectly suited for trading environments where conditions constantly evolve. Unlike traditional frequentist statistics, Bayesian methods incorporate prior knowledge and update probabilities continuously.

Practical Implementation:

Start with prior probability distributions about market behavior

Update these distributions as new data arrives using Bayes’ theorem

Make decisions based on the full posterior distribution, not just point estimates

Example: A Bayesian trend-following system might start with a prior belief about market direction, continuously update this belief as new price information arrives, and size positions proportionally to the probability mass supporting the trend.
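A minimal sketch of that updating loop, using the conjugate Beta-Binomial model for a strategy’s win probability (the prior parameters and the trade outcomes here are invented for illustration):

```python
# Beta(alpha, beta) prior over the win probability; Bernoulli trade outcomes.
# Conjugacy means each win/loss updates the posterior in closed form.
alpha, beta = 2.0, 2.0  # weakly informative prior centered on 0.5

for won in [1, 1, 0, 1, 1, 1, 0, 1]:  # hypothetical sequence of trade results
    alpha += won
    beta += 1 - won

posterior_mean = alpha / (alpha + beta)
print(f"Posterior mean win rate after 8 trades: {posterior_mean:.2f}")
```

Position sizes can then scale with the posterior, for instance by feeding the posterior mean into the Kelly fraction from earlier rather than a fixed point estimate.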

Non-Linear Dynamics and Chaos Theory

Financial markets exhibit many characteristics of complex, non-linear systems—sometimes operating near the “edge of chaos” where they are neither completely random nor perfectly predictable.

Lyapunov Exponents: These mathematical tools from chaos theory measure how quickly nearby states in a system diverge over time. In trading terms, they help quantify how long predictions remain valid before uncertainty overwhelms the signal.

Phase Space Reconstruction: Techniques from dynamical systems theory can reconstruct the underlying dynamics of a market from time series data, potentially revealing structure in what appears to be random price movements.

Recurrence Analysis: By identifying when a market revisits similar states, recurrence plots and quantification tools can reveal hidden patterns that statistical approaches might miss.

Ergodic Theory: Path Dependence and Sequence Risk

Ergodicity examines whether time averages equal ensemble averages—a concept particularly relevant to trading where the specific sequence of returns matters tremendously.

Non-Ergodic Properties of Markets: Many market phenomena are non-ergodic, meaning individual paths matter enormously. A strategy that works “on average” may still lead to ruin if it experiences losses in an unfortunate sequence.

Kelly-Optimal Betting in Non-Ergodic Settings: Shannon’s colleague and collaborator, John Kelly Jr., developed the Kelly criterion specifically to address optimal betting in non-ergodic settings—maximizing the geometric growth rate rather than arithmetic returns.

Sequence Risk Mitigation: Techniques like dynamic position sizing, drawdown controls, and time-varying exposure help manage the non-ergodic nature of markets.

Complexity Theory and Fractals in Financial Markets

Financial markets display many characteristics of complex adaptive systems, including:

Self-Organization: Markets spontaneously organize into patterns without external direction.

Emergence: The collective behavior of market participants creates phenomena that cannot be predicted from individual actions alone.

Power-Law Distributions: Returns often follow “fat-tailed” distributions rather than the normal bell curve, leading to more frequent extreme events than standard models predict.

Fractal Patterns: As identified by Benoit Mandelbrot, market price movements often follow self-similar patterns that repeat across different time scales. Properly designed trading systems can exploit this fractal geometry.

Adaptive Behavior: Markets adapt to new information and strategies, creating a constant co-evolutionary process between different trading approaches.

Comprehensive Implementation Framework

To apply these theoretical concepts to practical trading, follow this expanded implementation framework:

1. Entropy Measurement and Signal Selection

Before building any predictive model, quantify the entropy of potential trading signals under different conditions:

Calculate Shannon entropy for various indicators, features, and market states

Identify conditions where entropy temporarily decreases, creating prediction opportunities

Rank potential signals by their information content, focusing on those with consistently lower entropy

Tools: Information gain calculations, conditional entropy measures, and mutual information metrics.

2. Signal Processing and Feature Engineering

Transform raw market data into features with improved predictive power:

Apply wavelet transforms to separate noise from signal across multiple time scales

Use information-theoretic feature selection methods to identify the most informative variables

Implement non-linear transformations that capture complex relationships

Example: Rather than using raw price data, transform it into relative strength metrics, statistical moments, or regime-specific indicators that have lower entropy in specific contexts.

3. Model Selection Based on Data Characteristics

Match your modeling approach to the entropy characteristics of your target:

For lower-entropy, more structured phenomena: parametric models, regression, or rule-based systems

For medium-entropy phenomena with complex patterns: machine learning approaches like gradient boosting or neural networks

For high-entropy phenomena with subtle dependencies: ensemble methods that combine multiple weak signals

4. Information-Theoretic Position Sizing

Implement sophisticated position sizing based on information theory principles:

Use the Kelly criterion as a baseline for optimal position sizing

Adjust position sizes dynamically based on the current entropy of the market

Implement fractional Kelly approaches to account for uncertainty in probability estimates

Create meta-models that adjust exposure based on how well your model is capturing current market information

5. Robust Testing Against Randomness

Develop testing methodologies that distinguish genuine edges from statistical flukes:

Compare strategy performance against ensembles of random strategies with similar trade frequencies

Implement Monte Carlo simulations to understand the range of possible outcomes

Calculate the minimum sample size needed to establish statistical significance based on your edge size (a rough sketch follows this list)

Test for robustness across different market regimes and entropy conditions
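As a rough sketch of the minimum-sample-size point above, the following uses the normal approximation to the binomial to ask how many trades are needed before an observed win rate differs from a fair coin at roughly 95% confidence. Real tests should also account for payoff asymmetry and multiple comparisons:

```python
import math

def min_trades(win_rate, z=1.96):
    """Approximate trades needed for win_rate to differ from 0.5 at ~95%
    confidence, using the normal approximation to the binomial."""
    edge = win_rate - 0.5
    return math.ceil(z**2 * win_rate * (1 - win_rate) / edge**2)

for p in (0.52, 0.55, 0.60):
    print(f"Win rate {p:.0%}: roughly {min_trades(p):,} trades needed")
```

A 52% win rate requires on the order of a few thousand trades to distinguish from luck, which is why short profitable backtests prove so little.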

6. Continuous Entropy Monitoring

Build systems that continuously monitor the information content of your signals:

Track how the entropy of your target variables changes over time

Detect when markets shift to higher-entropy states where prediction becomes more difficult

Adjust exposure automatically when your information edge weakens

Implement circuit breakers that reduce position sizes when entropy spikes
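A bare-bones version of such a monitor, tracking the entropy of up/down return signs over a rolling window (the window length and the 0.95-bit threshold are arbitrary illustrations, not recommendations):

```python
import numpy as np

def rolling_sign_entropy(returns, window=50):
    """Entropy (bits) of the up/down sign of returns over a rolling window."""
    out = np.full(len(returns), np.nan)
    for i in range(window, len(returns) + 1):
        p_up = np.mean(returns[i - window:i] > 0)
        probs = np.array([p_up, 1.0 - p_up])
        probs = probs[probs > 0]
        out[i - 1] = -np.sum(probs * np.log2(probs))
    return out

# Hypothetical usage: scale exposure down when the window looks coin-like
# returns = np.diff(np.log(prices))
# entropy = rolling_sign_entropy(returns)
# exposure = np.where(entropy > 0.95, 0.25, 1.0)  # circuit-breaker style cut
```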

Case Studies: Information Theory in Action

Case Study 1: Mean Reversion in Low-Entropy Regimes

A quantitative hedge fund discovered that certain market sectors exhibited temporarily low entropy following specific types of news events. By measuring the conditional entropy of price movements after these events, they identified predictable mean-reversion patterns that occurred only when specific conditions were met.

Their approach:

Continuously measure entropy across multiple market sectors

Identify temporary low-entropy windows following specific trigger events

Apply mean-reversion models only during these windows

Size positions according to the measured reduction in entropy

Exit positions when entropy returns to normal levels

This strategy generated consistent alpha by focusing exclusively on moments when genuine predictability emerged in otherwise noisy markets.

Case Study 2: Information Flow Between Markets

A systematic macro fund applied information theory to measure information flow between related markets. By calculating the transfer entropy between currencies, interest rates, and commodity prices, they identified lead-lag relationships that weren’t apparent from conventional correlation analysis.

Their findings revealed that certain markets acted as information sources for others, with predictable time delays in how information propagated through the financial system. By placing trades in the “receiver” markets based on movements in the “source” markets, they exploited these information asymmetries before they became widely recognized.

Conclusion: The Information-Theoretic Trader

While advanced algorithms and sophisticated coding skills remain essential tools for quantitative traders, the real edge comes from understanding the fundamental nature of what you’re trying to predict. Shannon’s entropy concept provides a robust framework for this understanding, transforming how we approach market prediction.

The truly successful quantitative traders aren’t necessarily those with the most sophisticated models or fastest execution systems, but those with a deep understanding of where and when predictability emerges in markets. They know how to:

Identify the least random, most predictable aspects of market behavior

Recognize when markets shift between high and low entropy states

Adjust their strategies and exposure accordingly

Size positions based on the quality of information available

Perhaps most importantly, they respect the limits of predictability. They don’t fight against randomness—they work with it, measuring it precisely and betting accordingly. They understand that in many cases, knowing what you cannot predict is just as valuable as knowing what you can.

Before choosing an algorithm, ask whether what you are trying to predict has low enough entropy to be predictable at all. As Shannon’s work demonstrates, in trading as in information theory, understanding the limits of predictability is often more valuable than the prediction itself.


