Synesia

How LLM Output Format Influences Results

According to the company’s Institute of AI, one of the models, TeleChat2-115B, and another unnamed model were trained on tens of thousands of Chinese-made chips. This achievement is especially noteworthy given the tighter US export rules that have limited China’s ability to purchase high-end processors from Nvidia and other foreign companies. In a statement shared on WeChat, the AI institute claimed that this accomplishment demonstrates China’s capability to independently train LLMs and signals a new era of innovation and self-reliance in AI technology.

The scale of these models is remarkable. China Telecom stated that the unnamed LLM has one trillion parameters. In AI terminology, parameters are the variables a model adjusts as it learns during training; the more parameters a model has, the more complex and capable it tends to be.
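As a rough illustration of where a parameter count comes from, the sketch below tallies the weights and biases of a small fully connected network. The layer sizes are invented for the example and have nothing to do with China Telecom's actual model architecture.

```python
# Count the learnable parameters (weights + biases) of a simple
# fully connected network, given its layer widths.
def count_params(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # weight matrix between the two layers
        total += n_out         # bias vector of the output layer
    return total

# A toy network: 4 inputs -> 8 hidden units -> 2 outputs.
print(count_params([4, 8, 2]))  # prints 58
```

Scaling the same arithmetic to transformer-sized layer widths is how trillion-parameter figures arise: the count grows roughly with the product of adjacent layer widths, not their sum.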