Zhipu AI has released and open-sourced the large visual language model GLM-4.1V-Thinking
On July 2nd, Zhipu AI officially released and open-sourced GLM-4.1V-Thinking, a large visual language model. GLM-4.1V-Thinking is a general reasoning model that supports multimodal inputs, including images, videos, and documents, and is designed for complex cognitive tasks. Zhipu also launched a new ecosystem platform, "Agent Application Space", and initiated the "Agents Pioneer Program", investing hundreds of millions of yuan to support AI Agent startup teams.