Weco is an AI-driven platform that automates and optimizes machine learning (ML) experiments, helping engineers streamline their workflows. By leveraging large language models (LLMs), Weco systematically refines code against user-defined metrics such as speed, accuracy, latency, or cost, without constant supervision. This continuous, automatic process explores numerous targeted experiments, feeding each outcome into a live tree search to iteratively improve performance.
Key Features and Functionality:
- Automated Code Optimization: Weco's core engine, AIDE, uses an LLM-guided tree search to iteratively explore and refine code: it applies changes, runs the user's evaluation script, and proposes further improvements based on the specified goal.
- Versatile Application: The platform supports a wide range of tasks, including GPU kernel optimization, model development, and prompt engineering, accommodating various programming languages and frameworks.
- Real-Time Dashboard: Users can monitor optimization progress through an interactive dashboard, providing visual tracking, solution tree exploration, and run management capabilities.
- Credit-Based Pricing: Weco offers a simple credit-based pricing model, with a free tier providing 20 credits, approximately equivalent to 100 optimization steps on GPT-5, allowing users to get started without a credit card.
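The optimization loop behind the tree search described above (propose candidate edits, evaluate each against the metric, expand the most promising node) can be sketched as a minimal, self-contained toy. This is an illustrative assumption, not Weco's actual implementation: the integer "solutions", the `evaluate` scoring function, and the deterministic `propose_children` stand-in for LLM-proposed code edits are all hypothetical.

```python
def evaluate(solution):
    # Toy stand-in for the user's evaluation script: score a candidate.
    # Here the "metric" is closeness to a target value of 10 (higher is better).
    return -abs(solution - 10)

def propose_children(parent):
    # Hypothetical stand-in for LLM-proposed code edits: small local tweaks
    # to the parent candidate. A real system would generate code variants.
    return [parent - 1, parent + 1, parent + 2]

def tree_search(initial, steps=20):
    # Greedy best-first tree search: each step expands the best unexpanded
    # node, evaluates its children, and keeps every result in the tree.
    tree = {initial: evaluate(initial)}
    expanded = set()
    for _ in range(steps):
        frontier = [n for n in tree if n not in expanded]
        if not frontier:
            break
        node = max(frontier, key=tree.get)
        expanded.add(node)
        for child in propose_children(node):
            tree.setdefault(child, evaluate(child))
    best = max(tree, key=tree.get)
    return best, tree[best]

best, score = tree_search(initial=0)
```

Starting from a poor candidate (`0`), the search greedily climbs toward the optimum (`10`, score `0`) within the step budget, which mirrors how each evaluated experiment outcome steers the next round of proposals.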
Primary Value and User Solutions:
Weco addresses common challenges in ML experimentation, such as time-consuming manual iteration, performance bottlenecks, and the need for expert-level code optimization. By automating the experimentation process, Weco frees engineers to focus on strategic decision-making and innovation, leading to faster breakthroughs and more efficient ML pipelines. Its ability to run code locally, test numerous variations, and identify real metric gains without guesswork lets users achieve better results with less effort.