emnudge.dev
Markov chains are funnier than LLMs
I liked these thoughts, though I was disappointed the author didn't give more examples of funny Markov chain output and unfunny ChatGPT output.
This bit is especially good:
Without a ton of “prompt engineering”, it’s actually quite easy to spot if some paragraph was LLM generated. It sounds soulless. It’s the most average thing you could have possibly produced given the context.
Asking an LLM for an “original thought” is almost oxymoronic, if not just moronic. It was built with the express purpose of not doing that.
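For anyone unfamiliar with why Markov chain output tends to be so much weirder: a word-level chain only tracks which words followed which in the source text, then walks that table at random, with no notion of context beyond the current word. Here is a minimal sketch of that idea; the function names and the toy corpus are my own illustration, not anything from the linked post.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=20):
    """Walk the chain, picking each next word at random from its observed followers."""
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Toy corpus standing in for real training text.
corpus = "the cat sat on the mat and the dog sat on the cat"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Because each step is a blind random hop, the output happily veers into nonsense, which is often funnier than the statistically "most average" continuation an LLM is trained to produce.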
I recently reworked the copy on the landing page for Innerhelm. Most of it was written by hand, but there was one part of the page I struggled with, so I used ChatGPT to generate suggestions. The copy I ended up using for that part feels very LLM-ish to me. I wonder how noticeable it is to others.