Discussion
Training Language Models via Neural Cellular Automata
dzink: “The long-term vision is: foundation models that acquire reasoning from fully synthetic data, then learn semantics from a small, curated corpus of natural language. This would help us build models that reason without inheriting human biases from inception.”
qsera: I think this is a bit risky, because it assumes that all the knowledge a human possesses about nature is acquired after birth. But is that correct? I think organisms also come with a partial, built-in understanding of nature at birth.
jamilton: I don’t think that assumption is being made — why do you think it is? If we stick with the metaphor, training a model could correspond to both the knowledge acquired after birth and the evolution that came before it. But I don’t think it’s particularly useful to keep reasoning in metaphors.