Microsoft Kosmos-1

A Multimodal Large Language Model
Product Information
Release date: 22 June, 2023
Platform: Desktop

Microsoft Kosmos-1 Features

Microsoft has unveiled Kosmos-1, a multimodal large language model (MLLM) that can respond not only to language prompts but also to visual cues, enabling an array of tasks including image captioning, visual question answering, and more. Kosmos-1 can take image and audio prompts, paving the way for the next stage beyond ChatGPT's text-only prompts.

The KOSMOS-1 model natively supports language, perception-language, and vision tasks, as indicated in Table 1. It is trained on web-scale multimodal datasets, including text data, image-text pairs, and arbitrarily interleaved images and text. Language-only data is also transferred to the model to calibrate its ability to follow instructions across modalities.

KOSMOS-1 can naturally handle both perception-intensive and natural language tasks, including visual dialogue, visual explanation, visual question answering, image captioning, simple math equations, OCR, and zero-shot image classification with descriptions.

Source: https://www.zdnet.com/article/now-microsoft-has-a-new-ai-model-kosmos-1/
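To make the idea of "arbitrarily interleaved images and text" concrete, here is a minimal sketch of how such an interleaved prompt might be assembled before being fed to a multimodal model. The token names (`<s>`, `<image>`, `</s>`) and the function itself are illustrative assumptions, not Kosmos-1's actual input format.

```python
# Hypothetical sketch: assembling an interleaved image-text prompt.
# Token names and structure are assumptions for illustration only,
# not the real Kosmos-1 tokenization.

def build_interleaved_prompt(segments):
    """Flatten (kind, value) segments into one prompt token sequence.

    kind is "text" (value: str) or "image" (value: an image handle);
    each image becomes a placeholder token that a real MLLM would
    replace with embedded visual features.
    """
    tokens = ["<s>"]  # assumed beginning-of-sequence marker
    for kind, value in segments:
        if kind == "text":
            tokens.extend(value.split())
        elif kind == "image":
            tokens.append("<image>")  # placeholder for image embeddings
        else:
            raise ValueError(f"unknown segment kind: {kind}")
    tokens.append("</s>")  # assumed end-of-sequence marker
    return tokens

# Example: a visual question answering style prompt
prompt = build_interleaved_prompt([
    ("image", "cat.png"),
    ("text", "Question: What animal is shown? Answer:"),
])
```

In this sketch, text and images can alternate in any order, which is what lets a single sequence express captioning, visual question answering, or multi-image dialogue.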
