Search results for "nodejs" (8819)
Product Description

MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by MosaicML. MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases (ALiBi).
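
The ALiBi mechanism can be illustrated with a short sketch. This is a minimal rendering of the published ALiBi formulation (per-head slopes forming a geometric sequence, assuming a power-of-two head count), not MosaicML's actual implementation:

```python
def alibi_slopes(n_heads: int) -> list[float]:
    """Per-head slopes: a geometric sequence 2^(-8/n), 2^(-16/n), ...

    Matches the published ALiBi recipe for power-of-two head counts;
    non-power-of-two counts use an extra interpolation step omitted here.
    """
    return [2.0 ** (-8.0 * (i + 1) / n_heads) for i in range(n_heads)]

def alibi_bias(n_heads: int, seq_len: int) -> list:
    """bias[h][i][j] = -slope_h * (i - j): a linear penalty on the
    query-key distance, added to attention scores in place of
    positional embeddings (causal entries j <= i are the ones used)."""
    slopes = alibi_slopes(n_heads)
    return [[[-s * (i - j) for j in range(seq_len)]
             for i in range(seq_len)] for s in slopes]
```

Because the penalty grows linearly with distance rather than coming from a fixed-size embedding table, positions beyond the training length still receive well-defined biases, which is what removes the hard context-length limit.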

Product Description

Phi-3.5-mini is a lightweight, state-of-the-art language model developed by Microsoft, designed to deliver high-quality reasoning capabilities within a compact architecture. Building upon the datasets used for Phi-3, it focuses on very high-quality, reasoning-dense data, including synthetic data and filtered publicly available websites. The model supports a 128K token context length, enabling it to handle extensive inputs effectively, and it has been refined through rigorous enhancement processes such as supervised fine-tuning.

Product Description

Aleph Alpha's LLM-powered agent accelerates complex semiconductor documentation retrieval, reducing search time by 90%.

Product Description

Granite-3.2-2B-Instruct is a 2-billion-parameter language model developed by IBM's Granite Team, designed to handle a wide range of instruction-following tasks. Built upon its predecessor, Granite-3.1-2B-Instruct, this model has been fine-tuned using a combination of permissively licensed open-source datasets and internally generated synthetic data, focusing on enhancing reasoning capabilities. It supports multiple languages, including English, German, Spanish, French, Japanese, Portuguese, and Arabic, among others.

Product Description

BLOOM-7B1 is a multilingual language model developed by BigScience, designed to generate human-like text across 48 languages. With over 7 billion parameters, it leverages a transformer-based architecture to perform tasks such as text generation, translation, and summarization. Trained on diverse datasets, BLOOM-7B1 aims to provide accurate and contextually relevant outputs, making it a valuable tool for researchers and developers in natural language processing.

Product Description

Llama 3.2 3B Instruct is a 3-billion parameter multilingual large language model developed by Meta, designed to excel in conversational AI applications. It leverages an optimized transformer architecture and has been fine-tuned using supervised learning and reinforcement learning with human feedback to enhance its performance in generating contextually relevant and coherent responses. Key Features and Functionality: - Multilingual Proficiency: Supports multiple languages, enabling seamless multilingual conversation.

Product Description

Codestral is an open-weight generative AI model developed by Mistral AI, specifically designed for code generation tasks. It assists developers in writing and interacting with code through a unified instruction and completion API endpoint. Proficient in over 80 programming languages—including Python, Java, C, C++, JavaScript, and Bash—Codestral also supports less common languages like Swift and Fortran, making it versatile across various coding environments.

Product Description

Granite-3.3-2B-Instruct is a 2-billion parameter language model developed by IBM's Granite Team, designed to enhance reasoning and instruction-following capabilities. With a context length of 128K tokens, it builds upon the Granite-3.3-2B-Base model, delivering significant improvements in benchmarks such as AlpacaEval-2.0 and Arena-Hard, as well as in mathematics, coding, and instruction-following tasks. The model supports structured reasoning through the use of `` and `` tags, allowing for clear delineation of its reasoning steps.

Product Description

NVIDIA Nemotron-Nano-9B-v2 is a compact, open-source language model designed to deliver high-performance reasoning and agentic capabilities. Utilizing a hybrid Mamba-Transformer architecture, it efficiently processes long-context sequences up to 128,000 tokens, making it suitable for complex tasks requiring extensive context understanding. The model supports multiple languages, including English, German, French, Italian, Spanish, and Japanese, and excels in instruction following and code generation.

Product Description

Granite-3.2-8B-Instruct is an 8-billion-parameter AI model fine-tuned for advanced reasoning tasks. Built upon its predecessor, Granite-3.1-8B-Instruct, it has been trained using a combination of permissively licensed open-source datasets and internally generated synthetic data tailored for complex problem-solving. The model offers controllable reasoning capabilities, ensuring its application is precise and contextually appropriate.

Product Description

BLOOM-1b7 is a transformer-based language model developed by the BigScience Workshop, designed to generate human-like text across 48 languages. As a scaled-down variant of the larger BLOOM model, it offers a balance between performance and computational efficiency, making it suitable for a wide range of natural language processing tasks. Key Features and Functionality: - Multilingual Support: Capable of understanding and generating text in 48 languages, facilitating diverse linguistic applications.

Product Description

Granite-4.0-Tiny-Base-Preview is a 7-billion-parameter hybrid mixture-of-experts (MoE) language model developed by IBM's Granite Team. It features a 128,000-token context window and utilizes the Mamba-2 architecture combined with softmax attention to enhance expressiveness. Notably, it omits positional encoding to improve length generalization. Key Features and Functionality: - Extensive Context Window: Supports up to 128,000 tokens, facilitating the processing of lengthy documents.

Product Description

Gemma 3 270M is a compact, text-only model within the Gemma family of generative AI models, designed to perform a variety of text generation tasks such as question answering, summarization, and reasoning. With 270 million parameters, it offers a balance between performance and efficiency, making it suitable for applications with limited computational resources. Key Features and Functionality: - Text Generation: Capable of generating coherent and contextually relevant text for tasks such as question answering and summarization.

Product Description

Phi-4-mini-reasoning is a compact, transformer-based language model developed by Microsoft, specifically optimized for mathematical reasoning tasks. With 3.8 billion parameters and support for a 128K token context length, it delivers high-quality, step-by-step problem-solving capabilities in environments where computational resources or latency are constrained. Fine-tuned using synthetic mathematical data generated by a more advanced model, Phi-4-mini-reasoning excels in multi-step, logic-intensive problem solving.

Product Description

Llama 3.2 1B Instruct is a multilingual large language model developed by Meta, designed to facilitate advanced natural language understanding and generation across multiple languages. With 1 billion parameters, this model is optimized for tasks such as dialogue generation, summarization, and agentic retrieval, offering robust performance in diverse linguistic contexts. Its architecture incorporates supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align outputs with human preferences.

Product Description

Granite-3.1-1B-A400M-Base is a language model developed by IBM's Granite Team, designed to handle extensive context lengths up to 128K tokens. This model is based on a decoder-only sparse Mixture of Experts (MoE) transformer architecture, incorporating fine-grained experts, dropless token routing, and load balancing loss. It supports multiple languages, including English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese.
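
The load-balancing loss used in MoE routing can be sketched in a few lines. This is a toy rendering of the Switch-Transformer-style auxiliary loss commonly paired with top-k routing, not IBM's actual Granite training loss; the function name and the choice of k are illustrative:

```python
import numpy as np

def load_balancing_loss(router_logits, k=2):
    """Switch-style auxiliary loss: n_experts * sum_e f_e * P_e.

    f_e = fraction of tokens whose top-k routing choices include expert e,
    P_e = mean routing probability the router assigns to expert e.
    The product is minimized when both load and probability mass are
    spread evenly across experts.
    """
    logits = np.asarray(router_logits, dtype=float)  # (tokens, experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)       # per-token softmax
    n_tokens, n_experts = probs.shape

    topk = np.argsort(-probs, axis=-1)[:, :k]        # chosen experts per token
    f = np.array([np.mean(np.any(topk == e, axis=-1))
                  for e in range(n_experts)])
    P = probs.mean(axis=0)
    return n_experts * float(np.sum(f * P))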

Product Description

Gemma 3n is a generative AI model optimized for deployment on everyday devices such as smartphones, laptops, and tablets. It introduces innovations in parameter-efficient processing, including Per-Layer Embedding (PLE) parameter caching and the MatFormer architecture, which collectively reduce computational and memory demands. The model supports audio, text, and visual inputs, enabling a wide range of applications from speech recognition to image analysis.

Product Description

The Phi-3-Small-128K-Instruct is a 7-billion-parameter, state-of-the-art language model developed by Microsoft. It is part of the Phi-3 family and is designed to handle a context length of up to 128,000 tokens. Trained on a combination of synthetic data and filtered publicly available web content, the model emphasizes high-quality, reasoning-dense properties. Post-training processes, including supervised fine-tuning and direct preference optimization, have been applied to enhance its instruction-following capabilities.

Product Description

Stable LM 2 12B is a 12.1 billion parameter decoder-only language model developed by Stability AI. Pre-trained on 2 trillion tokens from diverse multilingual and code datasets over two epochs, it is designed to generate coherent and contextually relevant text across various applications. The model employs a transformer decoder architecture with 40 layers, a hidden size of 5120, and 32 attention heads, supporting a sequence length of up to 4096 tokens. Key features include the use of Rotary Position Embeddings (RoPE).
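
Rotary Position Embeddings can be sketched as follows. This is a minimal NumPy illustration of the standard "half-split" RoPE formulation (GPT-NeoX style), not Stability AI's code; the base frequency of 10000 is the common default, assumed here:

```python
import numpy as np

def rope(x, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, dim).

    Each feature pair (x[:, j], x[:, j + dim//2]) is rotated by an angle
    that grows with position, so absolute position is encoded in the
    phase and dot products depend only on relative offsets.
    """
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) * 2.0 / dim)      # per-pair frequencies
    angles = np.outer(np.arange(seq_len), freqs)        # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

A useful property to note: for two copies of the same vector placed at different positions, the dot product after rotation depends only on the distance between the positions, which is what lets attention scores encode relative position.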

Product Description

StepFun is an innovative technology company specializing in the development of advanced artificial intelligence (AI) models and tools designed to enhance human-AI collaboration across various domains. By integrating cutting-edge research with practical applications, StepFun aims to provide solutions that streamline complex tasks, improve efficiency, and foster creativity. Key Features and Functionality: - Multimodal AI Models: StepFun has developed models like Step3, a multimodal reasoning model.