In a recent announcement, Meta unveiled its plans to develop recommendation models that surpass the scale of existing language models, such as ChatGPT and GPT-4.
While this endeavor aims to enhance content recommendation algorithms, it has raised questions about the necessity and implications of such colossal models.
Meta, known for its extensive research in multimodal AI, combines data from various sources to gain a comprehensive understanding of content. Although they rarely release these models to the public, the company utilizes them internally to improve relevance and targeting.
Explaining its expanding investment in computational resources, Meta stated that its recommendation models could potentially reach trillions of parameters, significantly larger than any existing language model.
However, it is important to note that these massive models are currently theoretical rather than operational. Meta says it aspires to build and efficiently deploy such models, but it has not confirmed that it is actively doing so.
Nevertheless, the implications of this endeavor suggest that it is more than just an aspirational concept. The phrase “understand and model people’s preferences” refers to analyzing users’ behavior.
While an individual’s preferences could be expressed in a concise list, the problem space Meta aims to tackle is vast, encompassing billions of content pieces with associated metadata.
It is understandable, therefore, that a model trained on this extensive data would require considerable size. However, the claim of being “orders of magnitude larger” than existing models, even those trained on vast written works, is undeniably staggering.
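One reason recommendation models balloon in size: in typical industrial recommenders, most parameters live in embedding tables, whose size scales with the number of distinct users and content items rather than with the depth of the network. A rough back-of-the-envelope sketch of that arithmetic follows; every figure in it is an illustrative assumption, not a number Meta has disclosed.

```python
# Back-of-the-envelope parameter count for a hypothetical large-scale
# recommender. All figures below are illustrative assumptions, not
# Meta's actual numbers.

def embedding_params(num_ids: int, dim: int) -> int:
    """Parameters in one embedding table: one dim-sized vector per ID."""
    return num_ids * dim

# Assumed vocabulary sizes for a Meta-scale platform (hypothetical).
users = 3_000_000_000        # on the order of Meta's monthly active users
items = 100_000_000_000      # content pieces plus associated metadata features
dim = 64                     # assumed embedding dimension per ID

total = embedding_params(users, dim) + embedding_params(items, dim)
print(f"{total:,} parameters")  # several trillion, from embeddings alone
```

Under these assumptions the embedding tables alone exceed six trillion parameters before any dense network layers are counted, which is why "trillions of parameters" is plausible for a recommender even though it would be extraordinary for a language model.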
Although the parameter count of GPT-4 is undisclosed, and performance cannot be measured by parameter count alone, GPT-3 — the model family behind the original ChatGPT — weighs in at approximately 175 billion parameters. If Meta’s claims are even partially accurate, the proposed model would operate at an unprecedented scale.
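To make the scale gap concrete, here is a rough sketch of raw weight-storage footprints at 16-bit precision (2 bytes per parameter). The ten-trillion figure is taken from the article's premise about Meta's ambitions, not from any confirmed specification, and real deployments add further overhead (optimizer state, activations, replication) on top of these minimums.

```python
# Raw weight storage at fp16 (2 bytes per parameter). Illustrative only:
# actual serving memory would be higher once optimizer state, activations,
# and replication are included.
BYTES_PER_PARAM = 2  # fp16

def weights_gb(params: int) -> float:
    """Minimum gigabytes needed just to hold the model weights."""
    return params * BYTES_PER_PARAM / 1e9

print(weights_gb(175_000_000_000))     # GPT-3-class model: 350.0 GB
print(weights_gb(10_000_000_000_000))  # ten-trillion-parameter model: 20000.0 GB
```

Even at this crude level, a ten-trillion-parameter model's weights would span tens of terabytes, forcing the kind of distributed sharding across many machines that only a handful of companies can operate.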
Consider the implications: an AI model equal to or larger than any created thus far. This model would ingest every action performed on Meta’s platforms, predicting users’ future actions and interests. The magnitude of this concept raises legitimate concerns about privacy and surveillance.
It is worth noting that Meta is not alone in this endeavor. TikTok, for instance, has already established itself as a frontrunner in algorithmic tracking and recommendation.
Meta’s pitch to advertisers, heavy on technical jargon and emphasizing its understanding of users’ interests, is an effort to secure advertising revenue and maintain its status as a leader in AI research.
In reality, users seldom receive direct queries about their preferences. Instead, platforms like Meta monitor user activities and serve ads based on inferred interests.
While the effectiveness of this approach is debatable, it serves as a pillar of the online advertising industry. Now, with the deployment of advanced AI technologies, companies seek to bolster the legitimacy and precision of ad targeting, especially as user skepticism and ad saturation increase.
The belief that a ten-trillion-parameter model is necessary to understand people’s preferences justifies the substantial investment in training such models. However, the true value and superiority of this approach compared to alternative methods remain uncertain.
The internet landscape has been built upon the assumption of precise ad targeting, and the latest technology aims to reinforce this belief in the face of a more critical marketing environment.