On Friday, October 18, Meta announced the release of a batch of new artificial intelligence models from its research unit.
The new lineup includes a notable development called Self-Taught Evaluator, which could offer a path toward less human involvement in the AI development process. Meta first introduced the tool in an August paper, in which it detailed how the evaluator relies on the same chain-of-thought technique used by OpenAI's o1 models to make reliable judgments about models' responses. The technique involves breaking down complex problems into smaller logical steps and appears to improve the accuracy of responses to challenging problems in subjects like science, coding, and math.
To train the evaluator model, Meta's researchers used data generated by artificial intelligence.
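The judging approach described above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not Meta's actual system: a judge model is prompted to reason step by step (the chain-of-thought part) before issuing a verdict on which of two candidate answers is better. The `call_model` function here is a stand-in placeholder; a real implementation would query a language model.

```python
def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would send the prompt to an LLM.
    # The stub returns a canned chain-of-thought ending in a verdict.
    return "Step 1: check correctness. Step 2: compare clarity.\nVerdict: A"

def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Ask the judge model to compare two answers, reasoning first."""
    prompt = (
        f"Question: {question}\n"
        f"Answer A: {answer_a}\n"
        f"Answer B: {answer_b}\n"
        "Think step by step about which answer is better, "
        "then end with 'Verdict: A' or 'Verdict: B'."
    )
    reasoning = call_model(prompt)
    # Only the final verdict is parsed; the intermediate steps are the
    # chain of thought intended to make the judgment more reliable.
    return "A" if reasoning.strip().endswith("Verdict: A") else "B"

print(judge("What is 2 + 2?", "4", "5"))  # prints "A" with the stub above
```

In the self-taught setup, verdicts like these would themselves be generated by a model and then used as training data for the evaluator, which is what reduces the need for human annotation.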
Other machine-intelligence tools Meta presented on Friday include an update to the image-identification Segment Anything model, a tool that speeds up large language models' response-generation times, and datasets that can aid the discovery of new inorganic materials.
Cybersecurity is especially important during this period of rapid development and spread of artificial intelligence. Fraudsters also have access to AI technologies, which has made their activities more sophisticated. Against this threat, users' personal awareness matters: for example, a search-engine query such as how to know if my camera is hacked will surface information about the signs of unauthorized access to a device.