Internal document: Google trained PaLM 2 on 3.6T tokens and 340B parameters, compared to 780B tokens and 540B parameters for the original PaLM in 2022 (Jennifer Elias/CNBC) 17-05-2023

Jennifer Elias / CNBC:
Google's PaLM 2 large language model is using nearly five times the amount of text data for training as its predecessor LLM, CNBC has learned.


Read more on Tech Meme