Alibaba has launched new artificial intelligence models that it says can understand images and carry out more complex conversations than its previous products, as the global race for leadership in the technology heats up.
According to CNBC, the Chinese technology giant said that its two new models, Qwen-VL and Qwen-VL-Chat, will be open source, meaning that researchers, academics, and companies worldwide can use them to build their own AI apps without having to train systems from scratch, saving time and expense.
Alibaba said that Qwen-VL can respond to open-ended queries related to different images and generate picture captions.
Qwen-VL-Chat, meanwhile, caters to more “complex interaction,” according to Alibaba, such as comparing multiple image inputs and answering several rounds of questions. Tasks that Alibaba says Qwen-VL-Chat can perform include writing stories and creating images based on photos a user provides, as well as solving mathematical equations shown in a picture.
One example Alibaba gave involves an input image of a hospital sign written in Chinese. The AI can answer questions about the locations of particular hospital departments by interpreting the sign.
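Because the models are open source, developers can experiment with this kind of image-and-question prompt through standard tooling. The snippet below is a minimal sketch of how Qwen-VL-Chat might be queried, assuming the checkpoint published on Hugging Face under the Qwen organization and the chat helpers that ship with the model's custom code; the image path and question are illustrative placeholders, not from the article.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face checkpoint name; trust_remote_code loads the model's own helper code.
MODEL_ID = "Qwen/Qwen-VL-Chat"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", trust_remote_code=True
).eval()

# Combine an image and a text question into a single multimodal prompt.
query = tokenizer.from_list_format([
    {"image": "hospital_sign.jpg"},          # placeholder image of a sign in Chinese
    {"text": "Which floor is the radiology department on?"},  # placeholder question
])

# First round of a multi-turn conversation; pass `history` back in for follow-up questions.
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```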
So far, much of generative AI — where the technology generates responses based on human inputs — has focused on responding to text. The latest version of OpenAI’s ChatGPT also has the ability to understand images and respond in text, much like Qwen-VL-Chat.
Alibaba’s two latest models are built upon the company’s large language model called Tongyi Qianwen, released earlier this year. An LLM is an AI model trained on huge amounts of data and underpins chatbot applications.
The Hangzhou-headquartered company this month open-sourced two other AI models. While open sourcing will not earn Alibaba any licensing fees, the distribution should help the company attract more users for its AI models at a time when its cloud division is looking to reignite growth as it prepares to go public.