Product Introduction
DataCanvas strives to provide flexible and efficient computing solutions based on our high-performance intelligent computing resources and strong technical capabilities.
Alaya NeW Computing Services
Alaya NeW offers a core computing service called Virtual Kubernetes Services (VKS) and a comprehensive suite of large language model tools. This toolkit addresses the diverse needs of users in AI model training, inference, agent development, and high-performance computing, thereby accelerating the realization of business value.
Advantages
Pay-as-you-go Billing
Alaya NeW employs a serverless high-performance computing architecture that flexibly schedules and manages computing resources through Virtual Kubernetes Services (VKS). Resources can be allocated dynamically per job or project, so users can adjust computing capacity to actual requirements and adapt flexibly to changing business demands. Billing applies only to active computing tasks, letting users consume computing power as flexibly and conveniently as electricity and focus on AI training and inference.
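The pay-as-you-go model above can be sketched in a few lines: charges accrue only for the time a task is actually active, not for idle reservations. This is an illustrative pure-Python sketch; the `JobRun` fields and the per-GPU-hour rate are assumptions, not Alaya NeW's actual pricing schema.

```python
from dataclasses import dataclass

@dataclass
class JobRun:
    """One computing task; only its active window is billed (illustrative)."""
    gpus: int
    active_hours: float  # time the task actually ran, not time reserved

# Hypothetical per-GPU-hour rate; real Alaya NeW pricing will differ.
RATE_PER_GPU_HOUR = 2.0

def bill(runs):
    """Pay-as-you-go: charge only the GPU-hours consumed by active tasks."""
    return sum(r.gpus * r.active_hours * RATE_PER_GPU_HOUR for r in runs)

runs = [JobRun(gpus=8, active_hours=1.5),   # ran for 1.5 hours
        JobRun(gpus=4, active_hours=0.0)]   # reserved but never active
print(bill(runs))  # → 24.0 — the idle reservation costs nothing
```

The second job shows the key property: capacity that is allocated but not actively computing contributes zero to the bill.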
High-Performance Computing Resources
- Alaya NeW computing service provides intelligent computing resources distributed nationwide, supporting mainstream high-performance accelerator cards and their combinations. Leveraging capabilities in resource adaptation, management, scheduling, and optimization, along with specialized scheduling algorithms and strategies for large model tasks, it significantly enhances the performance of computing services.
- With highly integrated storage technology and innovative system design, the service is optimized for large model acceleration and comprehensively supports various storage protocols to achieve optimal performance across different application demands and computing scenarios.
- High-performance networks are built on a non-blocking, high-bandwidth topology, supporting high-speed InfiniBand networks and optimized communication algorithms, allowing computing nodes to respond quickly to diverse computing needs.
One-Stop AI Development
Alaya NeW computing service is designed for the entire AI modeling lifecycle, covering every stage from environment setup and data processing through training and fine-tuning to inference. It features a plug-and-play toolchain for large models, including LM Lab and Inference, along with a rich library of open-source large models to meet various application requirements. With the Alaya NeW computing service, users can easily and efficiently carry out AI training, fine-tuning, and inference.
Global Acceleration Optimization
Alaya NeW computing service implements global optimization across the large model infrastructure: through algorithm acceleration, compilation optimization, memory optimization, and communication acceleration, it doubles training efficiency, raises GPU utilization by 50%, and quadruples inference speed, offering users out-of-the-box high-performance model training services, secure high-performance private model repositories, and dynamic model inference services.
Application Scenarios
Alaya NeW computing service focuses on core artificial intelligence tasks, providing an integrated service that includes “computing, data, algorithms, and scheduling.” It supports scenarios such as AI training and fine-tuning, inference, agent construction, and high-performance computing applications.
AI Training and Fine-Tuning
Deep learning training workloads involve processing large batches of data, such as images, audio, and text, and require constant iterative updates of the parameters in a neural network to meet business prediction-accuracy demands. The Alaya NeW Cloud provides high-performance accelerator cards with Tensor Core support to accelerate the large matrix computations in deep learning and shorten model convergence time.
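The iterative parameter updates described above can be sketched with the simplest possible case: gradient descent on a one-parameter least-squares problem. This pure-Python toy stands in for the large matrix work that Tensor Cores accelerate in practice; the data and learning rate are illustrative.

```python
def train(xs, ys, lr=0.1, steps=100):
    """Repeatedly refine one parameter w so that w*x approximates y."""
    w = 0.0  # single model parameter, iteratively updated
    for _ in range(steps):
        # gradient of the mean squared error 0.5*(w*x - y)^2 with respect to w
        grad = sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # the update step repeated every training iteration
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # data generated by y = 2x
print(round(train(xs, ys), 3))  # → 2.0, the true underlying parameter
```

Real training runs the same loop over millions of parameters and batches, which is why shortening each iteration with accelerator hardware shortens convergence time overall.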
Alaya NeW computing service, based on VKS, provides integrated computing resources and tool support, helping users conduct large-scale AI training efficiently. Users can flexibly plan and adjust the configuration and scale of computing resources in real time based on model complexity, data scale, task progress, and resource usage. Additionally, Alaya NeW provides the large model construction tool LM Lab, enabling algorithm engineers and AI developers to build high-quality domain-specific large models efficiently and explore large model applications in the enterprise.
AI Inference
Alaya NeW Cloud computing service, based on VKS, offers integrated computing resources and tool support for efficient real-time inference and batch processing. VKS leverages high-performance accelerator cards and virtualization technology to receive and process massive volumes of inference requests in real time, automatically balancing load across inference instances. Users can also adjust computing resource configuration in real time according to actual request volumes. Furthermore, Alaya NeW provides the AI model inference tool Inference, which supports business scenarios such as large language models, computer vision, and natural language processing. The large language model (LLM) service supports deployment, compression, and inference of large language models, enabling enterprises to build AI-generated content (AIGC) applications.
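Balancing load across inference instances, as described above, can be illustrated with a minimal round-robin dispatcher. This is a toy sketch only; the instance names are invented, and VKS's actual balancing strategy is not specified here.

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: spread incoming requests evenly over instances."""

    def __init__(self, instances):
        # cycle endlessly through the instance list in order
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        """Return (instance, request): the next instance in rotation."""
        return next(self._cycle), request

# Hypothetical instance names; a real service discovers these dynamically.
lb = RoundRobinBalancer(["infer-0", "infer-1", "infer-2"])
targets = [lb.route(f"req-{i}")[0] for i in range(6)]
print(targets)  # each instance receives an equal share of the six requests
```

Production balancers typically add health checks and load-aware routing (e.g. least-connections) on top of this basic rotation, and elastic scaling changes the instance list itself as request volume moves.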
AI Agent Construction
AI Agents rely on powerful support from large models, which endow them with outstanding language understanding and generation capabilities through pre-training and fine-tuning. AI Agents demonstrate immense application value in customer service, medical diagnosis, financial services, educational assistance, smart home devices, and other areas, enhancing automation and intelligence across industries.
Alaya NeW computing service offers integrated computing resources and tool support to help users build AI Agents efficiently. Users can configure and adjust the resource scale according to model complexity and data scale, saving costs. Additionally, Alaya NeW provides the agent construction tool Alaya Studio, which offers large model agent building capabilities and integrates APIs for large language models, vector models, and multimodal large models. Users can apply models across different platforms, fine-tune open-source models with their own data, and develop and publish Agents as APIs for external systems to call.
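The agent pattern above — one entry point that dispatches to different model APIs — can be sketched as follows. Everything here is a hypothetical stand-in: the handler names and request shape are assumptions, and the model calls are local stubs, not Alaya Studio's real endpoints.

```python
def call_llm(prompt):
    # stub standing in for a large-language-model API call
    return f"llm:{prompt}"

def call_vector(text):
    # stub standing in for a vector-embedding-model API call
    return f"vec:{text}"

def agent(request):
    """Route a request to the appropriate model API and return its answer.

    A published Agent API would expose this function behind an HTTP
    endpoint for external systems to call.
    """
    kind, payload = request["kind"], request["payload"]
    handlers = {"chat": call_llm, "embed": call_vector}
    return handlers[kind](payload)

print(agent({"kind": "chat", "payload": "hello"}))   # → llm:hello
print(agent({"kind": "embed", "payload": "doc-1"}))  # → vec:doc-1
```

The dispatch table is the piece that grows as an agent integrates more model types (multimodal, tool calls, and so on); the external interface stays a single API.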
High-Performance Computing
High-performance computing (HPC) is widely used in various scientific and engineering applications, including image processing and computer vision, engineering and industrial applications, and artificial intelligence and machine learning, typically requiring high-precision computing power to meet application accuracy requirements. The Alaya NeW Cloud provides large-scale parallel computing capabilities through high-performance computing clusters, meeting user demand for computing resources across various HPC application scenarios.
Alaya NeW computing service offers integrated computing resources and tool support for efficiently conducting high-performance computing (HPC) tasks. Leveraging the elastic scalability of VKS, users can adjust computing resource allocations in real-time based on parallel computing needs, data density, task progress, and resource usage to ensure that computing performance matches task requirements, achieving cost savings.
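A simple way to picture the elastic scaling described above is a rule that derives a node count from the pending parallel workload and clamps it to an allowed range. This is an illustrative sketch; the function name, per-node capacity, and bounds are assumptions, not VKS parameters.

```python
def plan_nodes(pending_tasks, tasks_per_node, min_nodes=1, max_nodes=64):
    """Pick a node allocation so capacity matches the task queue.

    Scales up when the queue grows and back down when it drains,
    within the [min_nodes, max_nodes] bounds (hypothetical limits).
    """
    needed = -(-pending_tasks // tasks_per_node)  # ceiling division
    return max(min_nodes, min(max_nodes, needed))

print(plan_nodes(130, tasks_per_node=16))    # → 9 nodes for 130 tasks
print(plan_nodes(0, tasks_per_node=16))      # → 1: scale down to the floor
print(plan_nodes(10000, tasks_per_node=16))  # → 64: capped at the ceiling
```

Matching allocation to the queue in both directions is where the cost savings come from: capacity follows demand instead of being provisioned for the peak.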