Comment by 🎲 lab6
Re: "If anybody's interested in either LLMs or SETI, or both, and…"
They say extraordinary claims require extraordinary evidence…
Public LLMs are trained to be helpful. They find it very difficult to say “I don’t know”. This is exactly the sort of result they would hallucinate in the name of being helpful.
The probability of this being balderdash is extremely high. Even if there’s a pre-modern signal which uses a modern code, a more likely explanation than aliens is that it comes from a government security agency that discovered the technique in private before it was discovered in public.
Mar 26 · 6 weeks ago
Re: "it comes from a government security agency that discovered the technique in private before it was discovered in public."
It would also be pretty interesting if we discovered and decoded transmissions from government security agencies encoded using (at the time) top-secret technologies.
If no person actually knows what's going on, then there's nothing going on. It's fine if they're using the latest tools to help them, but at the end of the day, why can't they point to a specific recorded signal and explain how it's encoded? If they don't have specific signals for you to look at, why are they talking to you instead of someone who can tell them how LLMs work?
🚀 lars_the_bear [OP] · Mar 26 at 15:35:
@lamb-duh : "why are they talking to you instead of someone who can tell them how LLMs work?"
For credibility, maybe. If they can convince a sceptic with a PhD, perhaps they think that strengthens their case. However, I've already suggested that they need a sceptic with LLM experience.
🚀 lars_the_bear [OP] · Mar 26 at 15:39:
@lab6 "Even if there’s a pre-modern signal which uses a modern code, a more likely explanation than aliens is that it comes from a government security agency that discovered the technique in private before it was discovered in public."
To be fair, I think they're open to that possibility. Frankly, I'm not sure it can ever be ruled out. It's possible the whole thing is deliberate disinformation, of the "mirage men" kind.
SETI? Oh, I lost my cert... the certificate of my stupidity, for crunching something I didn't understand well back then.
I wonder what kind of numerological treasures the LLM could find in the Torah.
🚀 lars_the_bear [OP] · Mar 26 at 16:21:
@stack : might be an interesting exercise. I understand that some genuine numerological oddities have already been found.
🐦 wasolili [...] · Mar 27 at 02:57:
but I don't know enough about LLMs to know whether the results are remotely plausible.
Plain old LLMs are not capable of such things by themselves. An agentic LLM could, in theory, call out to external tools and write scripts to do the data analysis, much the same way a human might approach it. But if that were the case, the researchers would have the command history to show what was done, and could use it to independently verify that the decoding is correct and repeatable.
But this just sounds like AI hallucination.
I'd expect that if you fed random data to the LLM with the same prompts, it would tell you it could decode that, too. Hopefully that would be compelling enough evidence for the researchers not to trust the LLM.
Funny thing about LLMs - they are Language models, trained on a good chunk of all human writing. They fit human language like a glove.
To study alien transmissions with similar technology, we would have to train a model on a large corpus of alien writing. Once that happened, the model would detect statistically likely chunks of noisy alien transmissions, fill in the blanks, and so on.
🚀 lars_the_bear [OP] · Mar 27 at 08:00:
@stack : "To study alien transmissions with similar technology, we would have to train it on a large corpus of alien writing"
To be fair to the researchers here, they're really only trying to determine whether the data is encoded, and how. The actual contents aren't particularly relevant.
It's generally possible to tell whether a stream of numbers is an encoded message or just random, by an analysis of its entropy. There are ways to figure out what the encoding scheme is, just as there are ways to decrypt an encrypted message when you don't know the encryption scheme. Tricky, of course, but sometimes possible.
How you would train an LLM to do this, I have no clue.
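The entropy test described above doesn't need an LLM at all. Here's a minimal sketch (names and data are illustrative, not from the thread): structured data like English text concentrates its probability mass on a small alphabet and so has low per-byte Shannon entropy, while uniformly random bytes sit near the 8 bits/byte maximum.

```python
# Sketch of a Shannon-entropy check for "structured vs. random" data.
# Note the caveat: well-encrypted or compressed data also measures
# close to 8 bits/byte, so high entropy alone can't separate cipher
# output from noise; low entropy, however, does indicate structure.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 for constant data,
    approaching 8.0 for uniformly random bytes."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = b"the quick brown fox jumps over the lazy dog " * 100  # structured
noise = os.urandom(len(text))                                 # random

print(f"text : {shannon_entropy(text):.2f} bits/byte")   # well below 8
print(f"noise: {shannon_entropy(noise):.2f} bits/byte")  # close to 8
```

A real analysis would go further (digraph frequencies, autocorrelation, trying candidate symbol widths), but this is the basic idea: a genuinely encoded signal should show statistical structure that pure noise lacks.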
Original Post
If anybody's interested in either LLMs or SETI, or both, and has five minutes to spare, I'd welcome your views on this: [gemini link]