Comment by ❄ freezr

Re: "My professional experience with Gemini, the soulless LLM,…"

In: u/darkghost

GenAI is the newest "elixir of youth" or, more realistically, one of the biggest frauds/scams ever made.

It is capitalism's wet dream to remove the infamous workers forever, pushing everybody to inject billions and billions of dollars. The reality is that nobody has a real use for it; everything it does is inaccurate, false, wrong.

In other circumstances it would be decried as a shameful scandal, but the investments were, and are, so huge that letting it fail is not possible.

However, there are so many points of failure that it is destined to crash down no matter what.

It is a matter of time...

❄ freezr

2025-06-10 · 11 months ago

17 Later Comments ↓

🚀 stack · 2025-06-11 at 01:20:

I've never tried Gemini as I avoid giving google any extra data. But I use ChatGPT quite a bit.

It is pretty useful as long as you already have a pretty good grasp of the subject (to spot nonsense), and could use an unreliable but extremely well-informed assistant. The more you already know, the more useful it is. Also do not expect it to reason well.

I use it to help me code a lot. Not to generate code, but to help me. Stuff that would take me longer to look up. APIs, vim key sequences to do oddball things, etc. Early on I asked it to write the boilerplate for a domain socket server, and it was helpful but it was stupidly verbose and missed a crucial call. Were I not an expert, it would never have worked. But it's really great at little stuff, like format strings and stupid library calls I forget.
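(For readers curious what that boilerplate looks like: below is a minimal sketch of a Unix domain socket echo server in plain Python stdlib. The socket path and the choice of which "crucial call" got missed are my own illustration, not from the original anecdote; unlinking the stale socket file and calling `listen()` are two calls LLM-generated versions commonly omit.)

```python
# Minimal Unix domain socket echo server (Python stdlib only).
import os
import socket

SOCK_PATH = "/tmp/demo.sock"  # hypothetical path for illustration

def serve_one_request(path=SOCK_PATH):
    # Remove a stale socket file left by a previous run -- an easy call
    # to forget, and without it bind() fails with EADDRINUSE.
    try:
        os.unlink(path)
    except FileNotFoundError:
        pass
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(path)
        srv.listen(1)  # also easy to forget; accept() fails without it
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)  # echo the request back
    os.unlink(path)
```

A client just connects to the same path with `socket.AF_UNIX` and sends bytes; the point is how little code this is, and how silently it breaks if one of those calls is dropped.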

A lot of code it ingested seems to be student assignments from the University of Mumbai.

I use it as a language tutor, which it excels at, being a language model. I only caught one major error. It delivers solid explanations, examples and subtleties of usage in different countries.

I think it is rarely wrong with language advice.

I laugh when people worry that it will take over the world. It's a really advanced spell-checker you can talk to which sometimes lies. I also laugh about trillion-dollar AI companies and all the power plants being built to 'get ready for AI'. DeepHole on the desktop will take care of that.

☕️ Morgan · 2025-06-11 at 05:48:

I use Gemini's suggest-code-as-you-type in VSCode, it's great. It often figures out what I'm doing and suggests the next bit correctly, and doesn't get in the way.

From time to time I ask it to write a chunk of complex code. So far the results have been impressive, but not very useful. Often the code more or less works, but takes a lot of effort to understand and clean up. Other times it leads me down the wrong path.

For throwaway code it's probably already quite useful, but the code I'm working on right now needs to be rock solid and I need to fully understand it as I'll be on the hook for maintaining it :)

👻 darkghost [OP] · 2025-06-11 at 10:43:

Sounds like it's helpful for this type of coding. But would anybody use these outputs to code any part of a piece of software controlling, say, a hospital instrument that keeps a human being alive? How about managing a nuclear reactor?

That's the part that worries me deeply. I work with data sometimes used to determine the medical safety of things. Bad data kills. So does bad analysis. (Also, Gemini is not set up to handle patient data, so I didn't do this kind of analysis on it.) Being knowledgeable already means I can do these things, and do them better.

🚀 clarahd · 2025-06-11 at 15:32:

Giving it small chunks of a programming problem, with it giving proper attribution and notice of licensing - that sounds like an efficiency aid.

Giving it a large complex problem to solve - besides not getting a reliable solution nor attribution of credit for the inputs - how can you claim mastery of the problem? It takes more than just vetting the correctness of what AI spews out - in your own solution you would have presumably weighed the suitability and interrelationships of the components.

It reminds me of our government getting rid of most IT expertise and then outsourcing a large project to an outside contractor. They lose granular control over projects, don't develop expertise, and get many unplanned consequences.

I think learning languages would be less of an issue since the information is in the public domain and no creative input is required to e.g. translate a phrase.

🚀 clarahd · 2025-06-11 at 15:41:

Also, the billions in public money assigned to AI is an example of why reliable public forums are necessary for collective management of collective assets - otherwise, hands off the assets!

This reminds me of the billions (and lives) wasted on fake war pretexts. A reliable public debate would expose the policy to various sources of expertise and contrary evidence.

❄ freezr · 2025-06-11 at 17:25:

@stack you use it as an enhanced web search. Are you a paying customer? If not, how much would you be willing to pay to continue using your favorite LLM?

Re: "I laugh when people worry that it will take over the world."

Unfortunately that is exactly what many CEOs expect from these services: fire nine people and have the tenth work like nine, because GenAI...

🚀 stack · 2025-06-11 at 21:19:

It can definitely be called a tool. Like all tools it can be used to improve one's life or commit murder -- that is up to the user.

In some cases, replacing people with a well-configured AI may be a great improvement. Next time my Internet connection needs to be checked or reset by the ISP, I'd much rather talk to an AI than some guy in India who will read the script to me for fifteen minutes after an hour of hold time.

If you've ever spent a few days at a Social Security office you will beg to have access to an AI. A [derogatory expletives removed] who finally sees you is much stupider and less informed, and is more prone to hallucinations, diabetes, and 'losing' your paperwork. Bring it on.

I seriously doubt that any AI today can generate a working application, or even a small component, without a really competent coder doing a lot of prompting, testing and fine-tuning. Much like those lawyers found out about AI's ability to research case law! There is way too much nonsense, and the effort to verify and clean up code is exponentially related to its size. Thus small demos and snippets are almost magic, but digging through 10,000 lines of slightly wrong AI-generated code may take longer than writing it, especially with an AI assistant. A million lines? No way.

Not to mention, figuring out data structures and logic, as well as user interface issues, will still need to be done by people.

What everyone seems to miss is that writing prompts in English that generate desired code does not work that well. That is what programming languages are for -- to express very complicated things unambiguously. Writing detailed instructions that result in computers doing things we want -- is called programming.

🦋 CarloMonte · 2025-06-12 at 08:16:

My private experience: I won't give names, as the ranking might change, but another LLM might produce better results; try them all. Locally hosted models can be an alternative, too. I use LLMs mostly for two use cases: reviewing/correcting text and as a private research assistant (answering questions at an abstract level). They are OK but not perfect at both. What bothers me is the nanny censor (here the differences between the products are huge), and the assumptions that I use Python, that I need them to code for me, and that their code matches the interfaces in my projects (all wrong).

🦁 Houjimmy · 2025-06-14 at 18:45:

Well, I have used a bunch of LLMs, from local models to web services. Now I am subscribed to ChatGPT.

I work in industry as a blue-collar supervisor, and I uploaded more or less 20 engineering books and manuals into a personalized GPT. It is helping me A LOT, giving me information and teaching me theory and other aspects of my profession that I don't completely master or skipped in my classes, and helping me avoid errors that could cost thousands, if not millions, in damage. That's not even counting how many lives could have been hurt or lost.

So, I agree AI will take some jobs, maybe a lot of them, but it can also help people a lot if they are open to new possibilities.

☕️ Morgan · 2025-06-15 at 08:34:

Ehm. I hope "how many lives could have been lost" was a joke; LLMs can be wrong and at the same time very convincing.

🦁 Houjimmy · 2025-06-15 at 10:01:

Because I don't have all day to dig through all the engineering books and docs, it searches for me and then I go check.

Dimensioning pipes, valves, actuators, meters and that kind of material can indeed be tricky and can lead to loss of life.

It is my assistant, but I am not dumb enough not to check its information. I am an engineer, damn, not a programmer. I have CRIMINAL RESPONSIBILITY over my work; I will never outsource it to an AI. But if I want to check an equation I don't remember anymore, or an ISA or ISO norm, I can go after it, or check whether (according to my own materials) there is a better and more adequate technology to implement instead. It helps a lot.

Also, I always ask it to cite the source and, if possible, the chapter and page. If everything checks out, then I proceed to do my job.

❄ freezr · 2025-06-15 at 22:55:

In the end, the main use is as a database you can interact with in natural human language. I don't believe this will hurt anybody...

I guess the only question is how much the real price will be...

🚀 stack · 2025-06-16 at 19:39:

The utility, cost and complexity of training and operating language models are grossly exaggerated, in order for liars to raise valuations into the trillions. As DeepSeek showed, even a moderate amount of optimization reduces the costs by orders of magnitude.

It is already feasible to run these locally, and very soon it will be cheap to do so.

🦋 CarloMonte · 2025-06-17 at 08:59:

Run, yes; train, no.

😎 decant · 2025-06-19 at 03:20:

I think as of now, based on my own experience, AI is good for a few tasks, and doing real research is not one of them. It is especially dangerous if you use AI to break into a new knowledge domain where you have little prior experience. You will get syntactically correct paragraphs dosed with hallucination.

I hope for a Fallout game with an unlimited AI-generated map and storyline; dreaming up this kind of over-the-top fantasy world is ideal for the current generation of hallucination-ridden LLMs.

👻 darkghost [OP] · 2025-06-19 at 06:47:

Love Fallout, but I can see the AI dreaming up level designs that require glitches to solve. Never change, Bethesda!

🦔 bsj38381 · 2025-10-08 at 19:12:

I personally wouldn't touch LLM AI stuff with a 10-foot pole now. It used to seem cool to me; now it's being used to spam garbage all over the internet. (I'm also a little annoyed that AI bros use LLM image generators to send videos of dead people doing random stuff to their loved ones - Zelda Williams, for example. But that's a different story.) I also deleted Google Gemini off my phone, and I find the ads for Google Gemini insulting to the highest degree as well. But I'll stop being a Debbie Downer.

Original Post

👻 darkghost

My professional experience with Gemini, the soulless LLM, not the protocol. — I do science type stuff. It's a living, for now at least. My employer encourages us to make use of Google Gemini. So I've been playing around with it. Here are my experiences: I asked Gemini to summarize journal articles I've written. Gemini gets the basic details wrong and then butchers the conclusions. Maybe that's too esoteric. So I bring up an old data set I've analyzed before and ask it to analyze it for me. I...

💬 19 comments · 6 likes · 2025-06-10 · 11 months ago