Web3


Riot Taps Advisors to Explore AI Partnerships as Bitcoin Miners Eye New Revenue Streams – Decrypt




On Wednesday, bitcoin mining company Riot Platforms said it is exploring partnerships in the artificial intelligence and high-performance computing sector as it aims to shore up its business and generate sustainable revenue streams.

The NASDAQ-listed company said it would ramp up evaluations for potential AI and high-performance computing (HPC) uses at its Corsicana Facility in Navarro County, Texas, citing increased interest from multiple potential partners.

Riot’s exploration of AI computing capabilities reflects a growing trend among Bitcoin miners to leverage their substantial power infrastructure and data center expertise for additional revenue opportunities beyond crypto mining.

The move comes as mining difficulty on the Bitcoin network has reached a historic high of 114.7 trillion at block height 883,502 on February 10, data from CoinWarz shows.

Meanwhile, revenue from Bitcoin mining hardware has dropped significantly over the past year, to as low as $10.40 a day at a 60% operating margin for an average ASIC unit such as the Antminer S21+ Hydro, according to data from Hashrate Index.

Alongside its AI explorations, Riot appointed three new directors with relevant expertise: former Hut 8 Mining CEO Jaime Leverton, former Meta senior engineer Doug Mouton, and real estate investment veteran Michael Turner.

Moving in to explore AI and high-performance computing is part of Riot’s initiatives to “maximize value” for its “entire portfolio of assets,” Riot CEO Jason Les said in a statement to Decrypt.

Other major crypto mining operators are making similar strategic shifts. Leverton, who just joined Riot’s board, previously led Hut 8’s expansion into HPC by acquiring TeraGo’s data center business.

Companies such as Hut 8 and Core Scientific are repurposing their infrastructure for AI workloads, leveraging existing power access and data centers. 

These diversification moves are also aimed at reducing dependence on Bitcoin’s price fluctuations while capitalizing on the growing demand for AI computing resources.

However, the company cautioned there’s no guarantee its assets are suitable for AI/HPC conversion or that partnerships can be negotiated on favorable terms.

Still, Bitcoin mining and other public crypto firms are beating the market, with their overall market cap expanding by 14% to bring their valuations to $108 billion, according to JPMorgan.

Riot also operates Bitcoin mining facilities in Rockdale, Texas, and Kentucky, along with electrical switchgear engineering operations in Colorado. 

The company’s stock, which trades on the NASDAQ under the ticker RIOT, is up 0.2% on the day to $11.16, Google Finance data shows.

Edited by Sebastian Sinclair





BlockDAG’s $200M Presale Marks a New Era in Blockchain, Challenging HBAR and TON – Web3oclock



BlockDAG’s $200M Presale Surge: What’s Fueling the Hype?

HBAR and TON: Market Giants Facing Volatility

Can BlockDAG’s Innovation Give It the Competitive Edge?

The Road Ahead: Will BlockDAG Surpass HBAR & TON?




Ultimate Comparison of DeepSeek Models: V3, R1, and R1-Zero



DeepSeek has gained recognition in the AI community with its latest models, DeepSeek R1, DeepSeek V3, and DeepSeek R1-Zero. Each model offers unique capabilities and is designed to address different AI applications. DeepSeek R1 specializes in advanced reasoning tasks, employing reinforcement learning to improve logical problem-solving skills. Meanwhile, DeepSeek V3 is a scalable natural language processing (NLP) model, leveraging a Mixture-of-Experts (MoE) architecture to manage diverse tasks efficiently. On the other hand, DeepSeek R1-Zero takes a novel approach by relying entirely on reinforcement learning without supervised fine-tuning.

This guide provides a detailed comparison of these models, exploring their architectures, training methodologies, performance benchmarks, and practical implementations.

DeepSeek Models Overview

1. DeepSeek R1: Optimized for Advanced Reasoning

DeepSeek R1 integrates reinforcement learning techniques to handle complex reasoning. The model stands out in logical deduction, problem-solving, and structured reasoning tasks.

Real-World Example

Input: “In a family tree, if Mark is the father of Alice and Alice is the mother of Sam, what is Mark’s relation to Sam?”

Expected Output: “Mark is Sam’s grandfather.”

DeepSeek R1 efficiently processes logical structures, ensuring its responses are both coherent and accurate.

2. DeepSeek V3: General-Purpose NLP Model

DeepSeek V3, a versatile NLP model, operates using a Mixture-of-Experts (MoE) architecture. This approach allows the model to scale effectively while handling various applications such as customer service automation, content generation, and multilingual processing.

Real-World Example

DeepSeek V3 ensures that responses remain concise, informative, and well-structured, making it ideal for broad NLP applications.

3. DeepSeek R1-Zero: Reinforcement Learning Without Supervised Fine-Tuning

DeepSeek R1-Zero takes a unique approach. It is trained exclusively through reinforcement learning without relying on traditional supervised fine-tuning. While this method results in strong reasoning capabilities, the model may occasionally generate outputs that lack fluency and coherence.

Real-World Example

Input: “Describe the process of volcanic eruption.”

Expected Output: “Volcanic eruptions occur when magma rises beneath the Earth’s crust due to intense heat and pressure. The magma reaches the surface through vents, causing an explosion of lava, ash, and gases.”

DeepSeek R1-Zero successfully conveys fundamental scientific concepts but sometimes lacks clarity or mixes language elements.

Model Architecture: How They Differ

1. DeepSeek V3’s Mixture-of-Experts (MoE) Architecture

The Mixture-of-Experts (MoE) architecture makes large language models (LLMs) more efficient by activating only a small portion of their parameters during inference. DeepSeek-V3 uses this approach to optimize both computing power and response time.

DeepSeek-V3 builds on DeepSeek-V2, incorporating Multi-Head Latent Attention (MLA) and DeepSeekMoE for faster inference and lower training costs. The model has 671 billion parameters, but it only activates 37 billion at a time. This selective activation reduces computing demands while maintaining strong performance.

MLA improves efficiency by compressing attention keys and values, lowering memory usage without sacrificing attention quality. Meanwhile, DeepSeek-V3’s routing system directs inputs to the most relevant experts for each task, preventing bottlenecks and improving scalability.

Unlike traditional MoE models that use auxiliary losses to balance expert usage, DeepSeek-V3 relies on dynamic bias adjustment. This method ensures experts are evenly utilized without reducing performance.

The model also features Multi-Token Prediction (MTP), allowing it to predict multiple tokens simultaneously. This improves training efficiency and enhances performance on complex tasks.

For example, if a user asks a coding-related question, DeepSeek-V3 activates experts specialized in programming while keeping others inactive. This targeted activation makes the model both powerful and resource-efficient.
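That routing behavior can be sketched in a few lines of Python. This is a toy top-k router with a per-expert bias term standing in for DeepSeek-V3’s dynamic bias adjustment; all shapes, names, and the random “experts” are illustrative, not the production architecture:

```python
import numpy as np

def moe_forward(x, gate_w, experts, bias, k=2):
    """Toy Mixture-of-Experts layer: route input x to the top-k experts.

    gate_w:  (d, n_experts) router weights
    experts: list of n_experts callables, each mapping (d,) -> (d,)
    bias:    (n_experts,) load-balancing bias added to routing scores only
    """
    scores = x @ gate_w                       # affinity of x to each expert
    topk = np.argsort(scores + bias)[-k:]     # bias steers selection, not mixing weights
    weights = np.exp(scores[topk])
    weights /= weights.sum()                  # softmax over the selected experts only
    # Only k experts run; the rest stay inactive, which is where the compute savings come from.
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, n = 8, 4
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n))
experts = [lambda v, W=rng.normal(size=(d, d)): W @ v for _ in range(n)]
bias = np.zeros(n)
y = moe_forward(x, gate_w, experts, bias)
print(y.shape)
```

Raising `bias[i]` for an underused expert makes it more likely to be selected without distorting the mixture weights of the experts that do run, which is the intuition behind balancing load without an auxiliary loss.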

2. Architectural Differences Between DeepSeek R1 and R1-Zero

DeepSeek R1 and DeepSeek R1-Zero benefit from the MoE framework but diverge in their implementation.

DeepSeek R1

Employs full MoE capabilities while dynamically activating experts based on query complexity.

Uses reinforcement learning (RL) and supervised fine-tuning for better readability and logical consistency.

Incorporates load balancing strategies to ensure no single expert becomes overwhelmed.

DeepSeek R1-Zero

Uses a similar MoE structure but prioritizes zero-shot generalization rather than fine-tuned task adaptation.

Operates solely through reinforcement learning, optimizing its ability to tackle unseen tasks.

Exhibits lower initial accuracy but improves over time through self-learning.

Training Methodology: How DeepSeek Models Learn

DeepSeek R1 and DeepSeek R1-Zero use advanced training methods to improve the learning of large language models (LLMs). Both models apply innovative techniques to boost reasoning skills, but they follow different training approaches.

1. DeepSeek R1: Hybrid Training Approach

DeepSeek R1 follows a multi-phase training process, combining reinforcement learning with supervised fine-tuning for maximum reasoning ability.

Training Phases:

Cold Start Phase: The model first fine-tunes on a small, high-quality dataset created from DeepSeek R1-Zero’s outputs. This step ensures clear and coherent responses from the start.

Reasoning Reinforcement Learning Phase: Large-scale RL improves the model’s reasoning skills across different tasks.

Rejection Sampling and Fine-Tuning Phase: The model generates multiple responses, keeps only the correct and readable ones, and then undergoes further fine-tuning.

Diverse Reinforcement Learning Phase: The model trains on a variety of tasks, using rule-based rewards for structured problems like math and LLM feedback for other areas.
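The rejection-sampling step in this pipeline can be sketched as a simple filter: sample several candidate answers, keep only those that pass correctness and readability checks, and reuse the survivors as fine-tuning data. The toy sampler and checkers below are stand-ins for illustration, not DeepSeek’s actual pipeline:

```python
import random

def rejection_sample(prompt, sample_fn, is_correct, is_readable, n=8):
    """Generate n candidate responses and keep only the correct, readable ones."""
    candidates = [sample_fn(prompt) for _ in range(n)]
    return [c for c in candidates if is_correct(prompt, c) and is_readable(c)]

# Toy stand-in: a "model" that sometimes answers well, sometimes badly.
random.seed(0)

def fake_sampler(prompt):
    return random.choice(["4", "5", "four!!", "2 + 2 = 4"])

kept = rejection_sample(
    "What is 2 + 2?",
    sample_fn=fake_sampler,
    is_correct=lambda p, c: "4" in c,     # rule-based correctness check
    is_readable=lambda c: "!" not in c,   # crude readability filter
)
print(kept)  # the survivors would become supervised fine-tuning examples
```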

2. DeepSeek R1-Zero: Pure Reinforcement Learning

DeepSeek R1-Zero relies entirely on reinforcement learning, eliminating the need for supervised training data.

Key Training Techniques:

Reinforcement Learning Only: It learns entirely through reinforcement learning, using a method called Group Relative Policy Optimization (GRPO), which simplifies the process by removing the need for a separate critic network.

Rule-Based Rewards: It follows predefined rules to calculate rewards based on accuracy and response format. This approach reduces resource use while still delivering strong performance on various benchmarks.

Exploration-Driven Sampling: It explores different learning paths to adapt to new scenarios, leading to improved reasoning skills.
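The group-relative idea behind GRPO can be illustrated numerically: sample a group of responses for one prompt, score each with the rule-based reward, and normalize each reward against the group’s mean and standard deviation to get an advantage, so no learned critic is needed. The reward values and group size below are made up for illustration, and real GRPO additionally folds these advantages into a clipped policy-gradient objective:

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: (r - mean) / std over one sampled group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero spread
    return [(r - mean) / std for r in rewards]

# Hypothetical rule-based rewards for 4 sampled answers to one math prompt:
# 1.0 for a correct final answer, +0.2 if the required format was followed.
rewards = [1.2, 0.2, 1.0, 0.0]
advs = group_relative_advantages(rewards)
print([round(a, 2) for a in advs])
```

Responses that beat their own group’s average get positive advantages and are reinforced; below-average responses are penalized, so the group itself plays the role a critic network would otherwise play.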

Overview of Training Efficiency and Resource Requirements

DeepSeek R1

Resource Requirements: It needs more computing power because it follows a multi-phase training process, combining supervised and reinforcement learning (RL). This extra effort improves output readability and coherence.

Training Efficiency: Although it consumes more resources, its use of high-quality datasets in the early stages (cold-start phase) lays a strong foundation, making later RL training more effective.

DeepSeek R1-Zero

Resource Requirements: It uses a more cost-effective approach, relying only on reinforcement learning. It uses rule-based rewards instead of complex critic models, which significantly lowers computing costs.

Training Efficiency: Despite being more straightforward, it performs well on benchmarks, proving that models can be trained effectively without extensive supervised fine-tuning. Its exploration-driven sampling also improves adaptability while keeping costs low.

Performance Benchmarks: How They Compare

| Benchmark | DeepSeek R1 | DeepSeek R1-Zero |
|---|---|---|
| AIME 2024 (Pass@1) | 79.8% (surpasses OpenAI’s o1-1217) | 15.6% → 71.0% (after training) |
| MATH-500 | 97.3% (matches OpenAI models) | 95.9% (close performance) |
| GPQA Diamond | 71.5% | 73.3% |
| CodeForces (Elo) | 2029 (beats 96.3% of humans) | Struggles in coding tasks |

DeepSeek R1 excels in reasoning-intensive tasks, while R1-Zero improves over time but starts with lower accuracy.

How to Use DeepSeek Models with Hugging Face and APIs

You can run DeepSeek models (DeepSeek-V3, DeepSeek-R1, and DeepSeek-R1-Zero) using Hugging Face and API calls. Follow these steps to set up and run them.

1. Running DeepSeek-V3

Step 1: Clone the Repository

Run the following commands to download the DeepSeek-V3 repository and install the required dependencies:

git clone https://github.com/deepseek-ai/DeepSeek-V3.git
cd DeepSeek-V3/inference
pip install -r requirements.txt

Step 2: Download Model Weights

You can download the model weights from Hugging Face. Replace <model-name> with DeepSeek-V3 or DeepSeek-V3-Base:

huggingface-cli download <model-name> --revision main --local-dir /path/to/DeepSeek-V3

Move the downloaded weights to /path/to/DeepSeek-V3.

Step 3: Convert Model Weights

Run the following command to convert the model weights:

python convert.py --hf-ckpt-path /path/to/DeepSeek-V3 --save-path /path/to/DeepSeek-V3-Demo --n-experts 256 --model-parallel 16

Step 4: Run Inference

Use this command to interact with the model in real-time:

torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-addr $ADDR --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --interactive --temperature 0.7 --max-new-tokens 200

2. Running DeepSeek-R1

Step 1: Install and Run the Model

Install Ollama and run DeepSeek-R1:

ollama run deepseek-r1:14b

Step 2: Create a Python Script

Create a file called test.py and add the following code:

import ollama

model_name = 'deepseek-r1:14b'
question = 'How to solve a quadratic equation x^2 + 5*x + 6 = 0'

response = ollama.chat(model=model_name, messages=[
    {'role': 'user', 'content': question},
])

answer = response['message']['content']
print(answer)

# Save the answer for later reference
with open("OutputOllama.txt", "w", encoding="utf-8") as file:
    file.write(answer)

Step 3: Run the Script

Ensure Ollama is installed, then run:

pip install ollama
python test.py

3. Running DeepSeek-R1-Zero

Step 1: Install Required Libraries

Install the OpenAI library to use the DeepSeek API:

pip install openai

Step 2: Create a Python Script

Create a file called deepseek_r1_zero.py and add the following code:

from openai import OpenAI

# Replace <your-api-key> with your actual DeepSeek API key.
client = OpenAI(api_key="<your-api-key>", base_url="https://api.deepseek.com")

messages = [{"role": "user", "content": "What is the capital of France?"}]

response = client.chat.completions.create(
    model="deepseek-r1-zero",
    messages=messages
)

content = response.choices[0].message.content
print("Answer:", content)

# Append the assistant's reply so the follow-up question has context.
messages.append({"role": "assistant", "content": content})
messages.append({"role": "user", "content": "Can you explain why?"})

response = client.chat.completions.create(
    model="deepseek-r1-zero",
    messages=messages
)

content = response.choices[0].message.content
print("Explanation:", content)

Step 3: Run the Script

Replace with your actual API key, then run:

python deepseek_r1_zero.py

With these steps, you can set up and run DeepSeek models for a range of AI tasks.

Final Thoughts

DeepSeek’s latest models—V3, R1, and R1-Zero—bring significant advancements in AI reasoning, NLP, and reinforcement learning. DeepSeek R1 dominates structured reasoning tasks, V3 offers broad NLP capabilities, and R1-Zero showcases innovative self-learning potential.

With growing adoption, these models will shape AI applications across education, finance, healthcare, and legal tech.




In-Depth Analysis of the Music Production Software Market: Growth Opportunities, Key Trends, and Forecast 2025-2034 | Web3Wire



Music Production Software Market Size

What Are the Projected Growth and Market Size Trends for the Music Production Software Market?

In recent times, the market size of music production software has increased significantly. The market, valued at $1.45 billion in 2024, is anticipated to reach $1.55 billion in 2025, a compound annual growth rate (CAGR) of 6.5%. This growth in the previous period can be linked to factors such as the transition to cloud-based solutions, a rise in self-producing artists, the proliferation of mobile music production, the expansion of music streaming services, and higher penetration of mobile devices and the Internet among consumers.

Over the coming years, the music production software market is expected to grow substantially, reaching $2 billion in 2029 at a compound annual growth rate (CAGR) of 6.7%. Factors contributing to this growth within the forecast period include rising demand for digital audio content, a surge in the use of DJ software, the rise of paid streaming services, a broader need for music composition software, and an increase in the number of musicians and artists. Future trends likely to shape this growth include technological innovations, the emergence of cloud-based music production systems, an expansion of subscription-based models, the adoption of virtual studio technology (VST), and the implementation of AI-powered features within the software.

What Factors Are Propelling the Expansion of the Music Production Software Market?

The rapid increase in the need for digital audio material is predicted to drive the expansion of the music production software market. Digital audio content encompasses sound recordings, music, or any other audio recordings that have been digitized and stored digitally. Several factors, such as ease of use, the widespread adoption of smart devices, and the popularity of audiobooks, contribute to the demand for digital audio content. Music production software provides a variety of features that let users create, record, adjust, blend, and master audio recordings, podcasts, soundtracks, and other types of digital audio content. For example, in June 2023, WordsRated, a US-based non-profit organization, reported that 2022 audiobook sales in the United States exceeded $1.81 billion, a 3.43% increase over 2021. Hence, the escalating demand for digital audio content is fueling the rise of the music production software market.

Get Your Free Sample Now – Explore Exclusive Market Insights:https://www.thebusinessresearchcompany.com/sample.aspx?id=15423&type=smp

Which Leading Companies Are Shaping the Growth of the Music Production Software Market?

Major companies operating in the music production software market are Apple Inc., Adobe Inc., Yamaha Corporation, MAGIX Software GmbH, Ableton AG, Universal Audio, Steinberg Media Technologies GmbH, Image Line Software, Serato, Arturia SA, PreSonus Audio Electronics Inc., Reason Studios, Celemony Software GmbH, Waves Audio Ltd., Native Instruments USA Inc., Cakewalk Inc., NCH Software, Acoustica Inc., Bitwig GmbH, Cockos Incorporated, Tracktion Software Corporation, MOTU Inc., GoldWave Inc., Acon Digital AS, and Zynewave.

What Are the Major Trends Shaping the Music Production Software Market?

Leading firms in the music production software market are developing innovative offerings, particularly AI-enabled music production plugin tools that help musicians create new compositions. These are digital apps or software plugins that use AI and machine learning to automate or enhance different facets of music production. For example, in November 2023, DigiTraxAI, an American AI music company, introduced KR38R PRO, its flagship AI music production plugin tool. It is compatible with major digital audio workstations (DAWs) and uses powerful algorithms for creating, remixing, and altering various aspects of music compositions, allowing for an efficient workflow. The tool features motion and flow sliders for dynamic control of the music, along with a central dice button for randomization, facilitating open-ended exploration in music creation. The launch of KR38R PRO represents the culmination of a seven-year effort in human-AI collaboration, with music theory as the central innovation, giving composers and producers instant access to a wide range of creative possibilities.

What Are the Key Segments of the Music Production Software Market?

The music production software market covered in this report is segmented as follows:

1) By Type: Editing, Mixing, Recording
2) By Deployment: On-Premise, Cloud-Based
3) By Application: Artists, Musicians, Entertainment, Education
4) By End-User: Professionals, Non-Professionals

Subsegments:
1) By Editing: Audio Editing Software, MIDI Editing Software, Beat Making Software, Audio Restoration And Repair Tools
2) By Mixing: Digital Audio Workstations (DAWs) For Mixing, Virtual Audio Effects (VSTs), EQ And Compression Plugins, Reverb And Delay Plugins, Audio Mixing Consoles
3) By Recording: Multi-Track Recording Software, Audio Interface Software, Vocoder And Auto-Tune Software, Recording And Editing Plugins, Live Performance Recording Software

Pre-Book Your Report Now For A Swift Delivery:https://www.thebusinessresearchcompany.com/report/music-production-software-global-market-report

Which Region Dominates the Music Production Software Market?

North America was the largest region in the music production software market in 2024. Asia-Pacific is expected to be the fastest-growing region in the forecast period. The regions covered in the music production software market report are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, the Middle East, and Africa.

What Is Covered In The Music Production Software Global Market Report?

– Market Size Analysis: Analyze the Music Production Software Market size by key regions, countries, product types, and applications.
– Market Segmentation Analysis: Identify various subsegments within the Music Production Software Market for effective categorization.
– Key Player Focus: Focus on key players to define their market value, share, and competitive landscape.
– Growth Trends Analysis: Examine individual growth trends and prospects in the market.
– Market Contribution: Evaluate contributions of different segments to overall Music Production Software Market growth.
– Growth Drivers: Detail key factors influencing market growth, including opportunities and drivers.
– Industry Challenges: Analyze challenges and risks affecting the Music Production Software Market.
– Competitive Developments: Analyze competitive developments, such as expansions, agreements, and new product launches in the market.

Unlock Exclusive Market Insights – Purchase Your Research Report Now!https://www.thebusinessresearchcompany.com/purchaseoptions.aspx?id=15423

Connect with us on:
LinkedIn: https://in.linkedin.com/company/the-business-research-company
Twitter: https://twitter.com/tbrc_info
YouTube: https://www.youtube.com/channel/UC24_fI0rV8cR5DxlCpgmyFQ

Contact Us
Europe: +44 207 1930 708
Asia: +91 88972 63534
Americas: +1 315 623 0293
Email: info@tbrc.info

Learn More About The Business Research Company
With over 15,000 reports from 27 industries covering 60+ geographies, The Business Research Company has built a reputation for offering comprehensive, data-rich research and insights. Our flagship product, the Global Market Model, delivers comprehensive and updated forecasts to support informed decision-making.

This release was published on openPR.

About Web3Wire
Web3Wire – Information, news, press releases, events and research articles about Web3, Metaverse, Blockchain, Artificial Intelligence, Cryptocurrencies, Decentralized Finance, NFTs and Gaming. Visit Web3Wire for Web3 News and Events, Block3Wire for the latest Blockchain news and Meta3Wire to stay updated with Metaverse News.






Judge Sides With Thomson Reuters in AI Copyright Dispute – Decrypt




On Tuesday, a federal judge ruled in favor of Reuters’ parent company, Thomson Reuters, in their lawsuit against legal AI developer Ross Intelligence.

U.S. Circuit Judge Stephanos Bibas said he was revising his 2023 summary judgment opinion on the case, court documents show.

The ruling stems from a May 2020 lawsuit in which Thomson Reuters accused San Francisco-based Ross Intelligence of unlawfully copying content from its Westlaw platform to train its AI using data acquired from Michigan-based LegalEase Solutions.

Since the launch of ChatGPT in 2022, media outlets, artists, and authors have expressed concerns that their content was being used to train AI models.

Many, including Game of Thrones creator George RR Martin, John Grisham, and Michael Connelly, have sued developers, accusing them of using their work without permission or compensation. In December 2023, the New York Times sued OpenAI, alleging its articles were used to train ChatGPT.

“In my 2023 opinion, I denied summary judgment on fair use,” Judge Bibas wrote. “But with new information and understanding, I vacate those sections of that order and its accompanying opinion addressing fair use. Fair use is an affirmative defense, so Ross bears the burden of proof.”

Judge Bibas explained that after Ross was denied a license to use Westlaw content, it acquired training data from LegalEase—a research and writing service provider that offers outsourced legal support—which provided ‘Bulk Memos’ or collections of legal queries and responses.

“LegalEase sold Ross roughly 25,000 Bulk Memos, which Ross used to train its AI search tool,” Bibas wrote. “In other words, Ross built its competing product using Bulk Memos, which in turn were built from Westlaw headnotes. When Thomson Reuters found out, it sued Ross for copyright infringement.”

LegalEase, according to Judge Bibas, provided a guide explaining how to create the questions and answers using Westlaw headnotes. The guide instructed users not to copy and paste the headnotes directly.

“The parties agree that LegalEase had access to Westlaw and used it to make the Bulk Memos,” Judge Bibas wrote. “Of course, access alone is not proof. However, when a Bulk Memo question resembles a headnote more than the original judicial opinion, it strongly suggests actual copying.”

Judge Bibas found that Ross Intelligence infringed on 2,243 headnotes, with the only remaining factual question being whether some of those headnotes’ copyrights had expired. He also ruled that Ross Intelligence’s defenses, including innocent infringement, copyright misuse, merger, and scenes à faire, all fail.

“Smart man knows when he is right; a wise man knows when he is wrong,” Judge Bibas wrote. “Wisdom does not always find me, so I try to embrace it when it does––even if it comes late, as it did here.”

Edited by Sebastian Sinclair





Harrison.ai Lands Massive $112M Series C to Disrupt Global Healthcare – Web3oclock



AI-Powered Tools for Faster, More Accurate Diagnoses:

Annalise.ai – Focused on radiology, assisting doctors in detecting diseases from imaging scans.

Franklin.ai – A pathology tool designed to help clinicians diagnose medical conditions more efficiently.

‘A Second Pair of Eyes’ for Clinicians:

Rapid Growth and Global Expansion:

Launched Franklin.ai, its pathology-focused AI solution.

Developed an AI-powered prostate biopsy tool, expected to launch in 2025.

Monetized Annalise.ai, which has tripled its annual recurring revenue for three consecutive years.

Expanded to 15 countries, including the U.K., U.S., Germany, Spain, UAE, and India.

Secured regulatory clearance in 40 countries, including 12 FDA approvals in the U.S.

Standing Out in a Competitive Market:

Its Chest X-ray AI has been cleared in 40 countries and can detect 124 findings—four times more than its closest competitors.

Its CT Brain AI can identify 130 findings, again 4x the industry standard.

A study conducted at Alfred Health in 2024 suggests Harrison’s AI can detect lung cancer 16 months earlier, potentially identifying 32% more cases ahead of time.

Pioneering AI in Medical Imaging:

What’s Next?




Cobalt Free Batteries Market Key Players Analysis – Conamix, AESC, BYD, CALB, CATL, Cobra. | Web3Wire



Cobalt Free Batteries Market

InsightAce Analytic Pvt. Ltd. announces the release of a market assessment report, “Cobalt Free Batteries Market – By Type (Lithium Ferrous (Iron) Phosphate Battery, Lead Acid Batteries), By End Use (Electronic Vehicles, Energy Storage), Trends, Industry Competition Analysis, Revenue and Forecast To 2031.” The Cobalt Free Batteries Market is estimated to reach over USD 4.10 billion by 2031, exhibiting a CAGR of 14.4% during the forecast period.

Get Free Access to Demo Report, Excel Pivot and ToC: https://www.insightaceanalytic.com/request-sample/2599

Rechargeable batteries without cobalt are known as cobalt-free batteries. Instead, they usually contain substitute elements such as iron, manganese, or nickel, which mitigates supply chain problems associated with cobalt sourcing and reduces environmental impact. Battery makers are also adopting cobalt-free materials because of the ethical difficulties surrounding cobalt mining. The rising use of cobalt-free cathodes in electric vehicles is responsible for the market’s expansion. Because cobalt is a scarce and expensive metal, it raises the cost of producing lithium-ion batteries and, in turn, the price of electric cars, which is why researchers are concentrating on producing cobalt-free cathode materials at scale. The increasing demand for electric automobiles is expected to be the primary driver of market expansion.

In the upcoming years, the market is expected to develop due to rising global disposable income and consumer preference for environmentally friendly automobiles like electric cars. The high cost of fossil fuels and government attempts to minimize pollution are expected to propel market expansion throughout the forecast period.

List of Prominent Players in the Cobalt Free Batteries Market:
• Conamix
• AESC
• BYD
• CALB
• CATL
• Cobra
• OptimumNano Energy Co., Ltd.
• Sunwoda Electronic Co., Ltd.
• Panasonic
• Murata
• Toshiba
• LITHIUMWERKS
• SVOLT
• SPARKZ

Expert Knowledge, Just a Click Away: https://calendly.com/insightaceanalytic/30min?month=2025-02

Market Dynamics:

Drivers:
The increasing cost of fossil fuels and the rising consumption of petroleum products for combustion are estimated to drive market growth, per the market analysis. The market is expected to grow in the upcoming years due to the growing number of electric vehicles on the road worldwide, rising disposable income, and rising economic standards. Throughout the projected period, growing concerns about the environmental damage caused by fossil fuels are also expected to propel market expansion.

Challenges:
The development of cobalt-free battery technology may be complicated and hampered by technological challenges. Finding replacement materials with energy density, stability, and cycle life on par with or better than those of cobalt-containing batteries is a major barrier to the market’s expansion. Other challenges include strict government regulations, disruptions in the supply chain, and changing consumer preferences.

Regional Trends:
North America is expected to report the largest market share in the near future. Major automotive manufacturers are increasingly investing in EV production, driving demand for advanced battery technologies. The established battery manufacturing infrastructure in North America allows producers to quickly increase output and meet the growing demand for cobalt-free batteries. A robust supply chain for vital battery components like nickel and lithium also helps the region keep a competitive edge in the market. The cobalt-free battery market is expected to grow at the fastest rate in Asia Pacific over the forecast timeframe. The Asia Pacific region has shown considerable growth in the global cobalt-free battery market, which can be attributed to the drive for renewable energy sources and the creation of regulations that promote them.

Recent Developments:
• In July 2024, Japan-based AESC Group announced that it will open a new facility in Spain in 2026 to begin manufacturing a less expensive lithium-ion battery alternative. Targeting demand for lithium-iron-phosphate (LFP) batteries for electric vehicles and power storage systems, AESC plans to invest over 1 billion euros ($1.09 billion) in the new factory.
• In November 2023, Toshiba Corporation created a novel lithium-ion battery with a cobalt-free, 5V-class high-potential cathode material that effectively suppresses the side reactions that release performance-degrading gases. The battery can be used in a variety of applications, including electric cars and power tools.

Unlock Your GTM Strategy: https://www.insightaceanalytic.com/customisation/2599

Segmentation of Cobalt Free Batteries Market:

Global Cobalt Free Batteries Market – By Type
• Lithium Ferrous (Iron) Phosphate Battery
• Lead Acid Batteries

Global Cobalt Free Batteries Market – By End Use
• Electric Vehicles
• Energy Storage

Global Cobalt Free Batteries Market – By Region
North America:
• The US
• Canada
• Mexico
Europe:
• Germany
• The UK
• France
• Italy
• Spain
• Rest of Europe
Asia-Pacific:
• China
• Japan
• India
• South Korea
• Southeast Asia
• Rest of Asia Pacific
Latin America:
• Brazil
• Argentina
• Rest of Latin America
Middle East & Africa:
• GCC Countries
• South Africa
• Rest of the Middle East and Africa

About Us: InsightAce Analytic is a market research and consulting firm that enables clients to make strategic decisions. Our qualitative and quantitative market intelligence solutions inform the need for market and competitive intelligence to expand businesses. We help clients gain a competitive advantage by identifying untapped markets, exploring new and competing technologies, segmenting potential markets, and repositioning products. Our expertise lies in providing syndicated and custom market intelligence reports with in-depth analysis and key market insights in a timely and cost-effective manner.

Contact Us:
InsightAce Analytic Pvt. Ltd.
Visit: http://www.insightaceanalytic.com
Tel: +1 551 226 6109
Asia: +91 79 72967118
info@insightaceanalytic.com

This release was published on openPR.

About Web3Wire Web3Wire – Information, news, press releases, events and research articles about Web3, Metaverse, Blockchain, Artificial Intelligence, Cryptocurrencies, Decentralized Finance, NFTs and Gaming. Visit Web3Wire for Web3 News and Events, Block3Wire for the latest Blockchain news and Meta3Wire to stay updated with Metaverse News.




Grayscale Seeks Cardano ETF Approval After Filing for Solana, XRP and Dogecoin Funds – Decrypt




Grayscale Investments aims to launch a Cardano exchange-traded fund on the New York Stock Exchange, according to a document posted to the NYSE website. NYSE Arca submitted the 19b-4 form for a proposed rule change Monday on behalf of the issuer.

Cardano (ADA), the ninth-largest cryptocurrency by market capitalization, has jumped 12% on the day to $0.748 on news of the Cardano ETF filing, according to CoinGecko. It’s the largest gainer over the last day among the top 10 coins by market cap.

If approved, Grayscale’s Cardano ETF would be the first exchange-traded product for the blockchain, and would join Grayscale’s lineup of crypto ETFs, which includes the Grayscale Bitcoin Trust ETF, Bitcoin Mini Trust ETF, Ethereum Trust ETF, and Ethereum Mini ETF.

“The proposed rule change is designed to promote just and equitable principles of trade and to protect investors and the public interest in that there is a considerable amount of ADA price and market information available on public websites and through professional and subscription services,” Grayscale said in the Cardano ETF filing.

Grayscale currently offers over 20 crypto-related investment products, including trusts for Avalanche, Solana, and Dogecoin. The firm launched the Bitcoin Trust in 2013, which was converted into an ETF last year.

Only Bitcoin and Ethereum spot ETFs have been approved for trading so far in the United States, with the SEC giving both the green light last year.

According to the filing, the proposed rule change for the Cardano ETF falls under NYSE Arca Rule 8.201-E, which allows the listing of “Commodity-Based Trust Shares.”

The Delaware Trust Company serves as the trustee of the Cardano ETF, while the Coinbase Custody Trust Company holds the Trust’s ADA in cold storage.

Grayscale did not immediately respond to Decrypt’s request for comment.

As investors wait to see if the Cardano ETF is approved, the crypto faithful are optimistic that other ETFs—including Dogecoin, XRP, and Solana ETFs—will be approved in the coming months.

Grayscale itself has filed to convert other existing trusts into spot ETFs, including trusts for Solana, XRP, and Dogecoin.

Edited by Andrew Hayward





Discover Why LoRA Adapters Lead the Future of Fine-Tuning



As large language models (LLMs) evolve, the demand for efficient, scalable, and cost-effective fine-tuning methods increases. Traditional fine-tuning techniques require updating all model parameters, which consumes significant computational power, memory, and time. Low-rank adaptation (LoRA) has emerged as a revolutionary method that enables precise fine-tuning with minimal computational overhead. This article explores in depth why LoRA adapters represent the future of fine-tuning.

1. LoRA Matches Full Fine-Tuning Performance While Cutting Computational Load

LoRA maintains a model’s performance while dramatically reducing computational costs. Instead of modifying all parameters, LoRA fine-tunes a small subset by adjusting lower-rank matrices. This selective approach reduces training overhead while preserving accuracy across tasks. Studies comparing LoRA with full fine-tuning on RoBERTa and DeBERTa confirm that LoRA achieves nearly identical performance across multiple benchmarks while significantly lowering resource consumption (Hu et al., 2021).

By updating only a fraction of the model’s parameters, LoRA reduces the need for multiple high-end GPUs. Organizations can fine-tune their models using standard cloud infrastructure instead of investing in expensive hardware, making AI deployment more accessible.

2. LoRA Minimizes Memory Requirements

One of the biggest challenges in full fine-tuning is the immense memory overhead. LoRA solves this issue by minimizing the number of trainable parameters. For instance, in RoBERTa large, full fine-tuning requires updating over 350 million parameters. LoRA fine-tuning, however, reduces the trainable parameters to as little as 0.2%, cutting memory requirements drastically.

The concept of reducing trainable parameters through Low-Rank Adaptation (LoRA) is detailed in the paper “LoRA: Low-Rank Adaptation of Large Language Models” by Edward J. Hu et al. (2021). LoRA achieves parameter efficiency by factorizing the weight update matrix ΔW into two low-rank matrices, A and B, where:

ΔW = A × B

Here:

A is a matrix of shape (d × r),

B is a matrix of shape (r × d),

r (the rank) is a much smaller value than d, ensuring a significant parameter reduction.
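To make the factorization concrete, here is a toy LoRA-style forward pass in plain Python (an illustrative sketch only; real implementations use tensor libraries such as PyTorch, and the helper names below are invented for this example). The frozen base weight W is left untouched, and only A and B would receive gradient updates:

```python
def matvec(M, x):
    """Multiply a matrix M (list of rows) by a vector x."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    base = matvec(W, x)                  # frozen path: W @ x
    delta = matvec(A, matvec(B, x))      # low-rank update path: (A @ B) @ x
    scale = alpha / r                    # standard LoRA scaling factor
    return [b + scale * d for b, d in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight (identity here)
A = [[0.5], [0.5]]             # (d x r) factor with d = 2, r = 1
B = [[1.0, 1.0]]               # (r x d) factor
x = [1.0, 2.0]
print(lora_forward(W, A, B, x))  # [2.5, 3.5]
```

Because the base path and the low-rank path are simply summed, the adapter can be merged into W after training, adding no inference latency.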

Let’s consider a scenario where the original weight matrix W has a shape of 1024 × 1024, which contains:

1024 × 1024 = 1,048,576 parameters.

Using LoRA with a rank of 8, the two factorized matrices have dimensions:

A → (1024 × 8)

B → (8 × 1024)

The total number of parameters in these two matrices is:

(1024 × 8) + (8 × 1024) = 8,192 + 8,192 = 16,384

This results in:

16,384 / 1,048,576 ≈ 1.6% of the original parameter count.

Thus, instead of updating roughly 1.05 million parameters, LoRA fine-tunes just 16,384, a massive 98.4% reduction in trainable parameters.
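The arithmetic above is easy to verify with a quick sketch (plain Python, using the example’s dimensions and rank):

```python
def lora_param_counts(d: int, r: int) -> tuple[int, int]:
    """Compare full-rank vs. LoRA trainable parameter counts
    for a square d x d weight matrix factorized at rank r."""
    full = d * d                 # every entry of W is trainable
    lora = d * r + r * d         # A (d x r) plus B (r x d)
    return full, lora

full, lora = lora_param_counts(1024, 8)
print(full, lora)                                     # 1048576 16384
print(f"reduction: {100 * (1 - lora / full):.1f}%")   # reduction: 98.4%
```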

LoRA’s architecture allows models to operate within the same infrastructure used for inference. Thus, organizations no longer need massive GPU clusters to fine-tune models effectively. As a result, LoRA makes advanced AI development more accessible to startups and smaller enterprises that lack the resources for extensive training setups.

3. LoRA Accelerates Training and Improves Throughput

Because LoRA fine-tunes only a subset of parameters, it allows for larger batch sizes during training. Increasing the batch size speeds up training while maintaining the model’s accuracy. LoRA enables parallelized computations by reducing memory overhead, leading to faster convergence times.

Fine-tuning a large model with traditional methods can take weeks and consume vast computational resources. LoRA, however, enables organizations to train models in a fraction of that time. Businesses can iterate quickly, optimizing their models for different use cases without extended downtimes. This improvement is critical in industries like finance and healthcare, where models must adapt rapidly to new data and regulations (Xia et al., 2022).

4. LoRA Enables Cost-Effective Multi-Model Deployments

LoRA’s modular approach simplifies the deployment of multiple fine-tuned models. Organizations typically maintain several customized versions of a base model to cater to different clients or applications. Hosting separate full fine-tuned models, however, demands immense computational and storage resources.

The technical report titled “LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4” provides an in-depth evaluation of LoRAX’s capabilities in efficiently serving multiple fine-tuned models. The study demonstrates that LoRAX, an open-source Multi-LoRA inference server, facilitates the deployment of numerous LoRA fine-tuned models on a single GPU by utilizing shared base model weights and dynamic adapter loading. This approach significantly reduces deployment costs and enhances scalability. The report highlights that 4-bit LoRA fine-tuned models outperform base models by 34 points and GPT-4 by 10 points on average across various tasks.

With LoRAX, organizations only need to maintain a single large model while serving multiple specialized models on demand. This capability unlocks the potential for massive scalability while keeping operating expenses low. Businesses can now personalize AI experiences for multiple customers without running hundreds of dedicated models (Wang et al., 2023).
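The shared-base idea can be illustrated with a toy sketch (plain Python; the class and method names below are invented for illustration and are not the LoRAX API): one base weight matrix stays resident, while lightweight per-tenant low-rank adapters are applied on demand.

```python
def matvec(M, x):
    """Multiply a matrix M (list of rows) by a vector x."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

class SharedBaseServer:
    """Toy multi-adapter server: one resident base model, many adapters."""
    def __init__(self, base_W):
        self.base_W = base_W      # loaded once, shared across all tenants
        self.adapters = {}        # tenant id -> (A, B) low-rank pair

    def register(self, tenant, A, B):
        self.adapters[tenant] = (A, B)   # adapters are tiny vs. the base

    def forward(self, tenant, x):
        y = matvec(self.base_W, x)       # shared base computation
        if tenant in self.adapters:      # cheap per-tenant correction A @ (B @ x)
            A, B = self.adapters[tenant]
            delta = matvec(A, matvec(B, x))
            y = [yi + di for yi, di in zip(y, delta)]
        return y

# Two tenants share the same base model yet see different behavior.
server = SharedBaseServer([[1.0, 0.0], [0.0, 1.0]])           # identity base
server.register("legal", A=[[1.0], [0.0]], B=[[0.5, 0.0]])    # rank-1 adapter
print(server.forward("legal", [2.0, 4.0]))    # [3.0, 4.0]
print(server.forward("generic", [2.0, 4.0]))  # [2.0, 4.0]
```

Because each adapter is orders of magnitude smaller than the base model, swapping adapters per request is far cheaper than loading a separate fully fine-tuned model per tenant.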

5. LoRA Supports Continuous Innovation and Versatile Adaptation

LoRA’s evolution does not stop at weight-efficient fine-tuning. Researchers are continuously enhancing its capabilities to make it even more effective. Several advancements are on the horizon, including:

Text embedders to improve Retrieval-Augmented Generation (RAG) systems by optimizing search queries.

Multi-head decoders like Medusa that triple token generation speeds, enabling faster inference.

Task-specific adapters for improving domain-specific applications such as legal document classification and financial forecasting.

These innovations expand the applicability of LoRA across multiple fields, ensuring that it remains a relevant and growing technology (Huang et al., 2023).

6. LoRA Enhances Model Robustness

LoRA ensures that models generalize better by adapting only necessary parameters, reducing overfitting. Traditional full fine-tuning may cause models to overlearn specific datasets, reducing flexibility. LoRA preserves core knowledge while fine-tuning for niche tasks.

The paper “LoRA Dropout as a Sparsity Regularizer for Overfitting Control” discusses applying random dropout to LoRA’s learnable low-rank parameters to control overfitting during fine-tuning. This approach helps maintain the model’s core knowledge while adapting to specific tasks.
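As a rough illustration of the mechanism (a simplified sketch, not the paper’s exact scheme), dropout can be applied to an adapter factor’s entries during training so that each step updates a random sparse subset:

```python
import random

def lora_dropout(M, p, rng):
    """Zero each entry of a LoRA factor independently with probability p,
    scaling the survivors by 1/(1-p) so the expected value is unchanged."""
    scale = 1.0 / (1.0 - p)
    return [[0.0 if rng.random() < p else m * scale for m in row] for row in M]

rng = random.Random(0)
A = [[0.1, 0.2], [0.3, 0.4]]            # a tiny LoRA factor
A_train = lora_dropout(A, p=0.5, rng=rng)  # used in the forward pass while training
# Each surviving entry is doubled (1 / (1 - 0.5)); the rest are zeroed.
```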

7. LoRA Enables Domain-Specific Fine-Tuning

Many industries require models with specialized knowledge. LoRA makes it easier to create LLMs tailored for domains like legal, healthcare, and finance by training lightweight adapters without altering the base model’s fundamental understanding.

Several sources support the concept that Low-Rank Adaptation (LoRA) facilitates domain-specific fine-tuning by training lightweight adapters without altering the base model’s core understanding. For instance, a blog post on Run.ai discusses how LoRA adapters enable efficient fine-tuning of large language models by adjusting smaller parameters, which is particularly beneficial for adapting models to specific domains.

NVIDIA’s developer blog highlights that fine-tuning with LoRA on domain-specific datasets significantly enhances translation quality within those domains, demonstrating LoRA’s effectiveness in specialized applications. These sources provide insight into how LoRA can be applied to create models tailored for specific industries such as legal, healthcare, and finance.

8. LoRA Improves Edge AI Deployment

With LoRA’s reduced computational and memory footprint, AI models can be efficiently deployed on edge devices like smartphones and IoT systems. This ensures powerful AI capabilities without relying on cloud-based inference.

A notable study, “Skip2-LoRA: A Lightweight On-device DNN Fine-tuning Method for Low-cost Edge Devices,” introduces Skip2-LoRA. This method integrates LoRA adapters to boost network expressive power while maintaining low computational costs. This approach is particularly suitable for fine-tuning deep neural networks on resource-constrained edge devices like single-board computers.

The study reports that Skip2-LoRA reduces fine-tuning time by 90% on average compared to counterparts with the same number of trainable parameters while preserving accuracy. These findings suggest that LoRA’s reduced computational and memory footprint facilitates the efficient deployment of AI models on edge devices like smartphones and IoT systems, ensuring robust AI capabilities without reliance on cloud-based inference.

9. LoRA Allows Quick Model Updates

LoRA enables rapid fine-tuning without full retraining, allowing AI models to stay updated with new trends, regulations, or datasets. This capability is crucial for AI applications that need frequent updates without downtime.

Research supports that “LoRA enables rapid fine-tuning without full retraining, allowing AI models to stay updated with new trends, regulations, or datasets.” This approach facilitates efficient model updates by fine-tuning only a small subset of parameters. It reduces the computational resources and time required for model adaptation, making it particularly beneficial for applications needing frequent updates. For instance, IBM Research highlights that LoRA is a faster, cheaper way of turning large language models into specialists, enabling quick adaptation to new information.

Conclusion

LoRA represents a fundamental shift in AI fine-tuning. By achieving performance parity with full fine-tuning while reducing memory usage, computational costs, and training time, LoRA provides an unmatched advantage in AI scalability and efficiency. As researchers refine LoRA’s capabilities, its role in AI development will only grow stronger.

LoRA is the key to balancing performance, cost, and scalability for businesses and researchers aiming to optimize AI deployment. By embracing LoRA-based fine-tuning, organizations can unlock unprecedented flexibility and efficiency in building powerful AI applications.




Leap 2025 Marks Saudi Arabia’s Bold Move With a 14.9 billion AI Investment – Web3oclock



Massive Investments Drive AI and Digital Expansion:

Developing cutting-edge AI and cloud infrastructure

Empowering digital skills and talent

Supporting tech startups and entrepreneurship

Establishing Saudi Arabia as MENA’s largest digital economy

Tech Giants Back Saudi Arabia’s AI Boom:

Groq & Aramco Digital – Investing $1.5 billion in AI-powered cloud computing, strengthening Saudi Arabia’s role in AI leadership.

Alat & Lenovo – Confirming $2 billion for a robotics-based AI manufacturing hub in Saudi Arabia, alongside Lenovo’s regional headquarters in Riyadh.

Google – Launching a global AI hub in Saudi Arabia to cater to regional and international demand.

Qualcomm – Introducing the ALLaM language model on the Qualcomm AI Cloud, enhancing cloud-based AI solutions.

Alibaba Cloud – Partnering with Tuwaiq Academy and STC to train Saudi talent in AI and emerging technologies.

Databricks – Investing $300 million in Platform-as-a-Service (PaaS) solutions, driving AI expertise in the Kingdom.

SambaNova – Pledging $140 million for advanced AI infrastructure, reinforcing Saudi Arabia’s innovation ecosystem.

KKR & Gulf Data Hub – Announcing a 300MW data center investment, strengthening cloud computing and AI capabilities.

Salesforce – Expanding its Hyperforce platform with a $500 million investment, serving regional customers from Saudi Arabia.

Tencent Cloud – Investing $150 million to establish its first cloud region in the Middle East with integrated AI technologies.

Saudi Arabia: A Global AI Powerhouse



