That would result in a brittle solution and/or cat and mouse game.
The text that goes into a prompt is vast when you consider common web and document searches are.
It’s going to be a long road to good security requiring multiple levels of defense and ongoing solutions.
Since sarcasm is context specific, would that be a... finite machine?
I'll be here all night, don't forget to tip your bartenders!
There’s no way it was a serious suggestion. Holy shit, am I wrong?
I call it `prepared prompts`.
If you have some secret sauce for doing prepared prompts, may I ask what it is?
If every MCP response needs to be filtered, then that slows everything down and you end up with a very slow cycle.
* You can reduce risk of hallucinations with better prompting - sure
* You can eliminate risk of hallucinations with better prompting - nope
"Avoid" is that intersection where audience will interpret it the way they choose to and then point as their justification. I'm assuming it's not intentional but it couldn't be better picked if it were :-/
another prolific example of this fallacy, often found in the blockchain space, is the equivocation of statistical probability, with provable/computational determinism -- hash(x) != x, no matter how likely or unlikely a hash collision may be, but try explaining this to some folks and it's like talking to a wall
A M&B is a medieval castle layout. Those bloody Norsemen immigrants who duffed up those bloody Saxon immigrants, wot duffed up the native Britons, built quite a few of those things. Something, something, Frisians, Romans and other foreigners. Everyone is a foreigner or immigrant in Britain apart from us locals, who have been here since the big bang.
Anyway, please explain the analogy.
Essentially: you advance a claim that you hope will be interpreted by the audience in a "wide" way (avoid = eliminate) even though this could be difficult to defend. On the rare occasions some would call you on it, the claim is such it allows you to retreat to an interpretation that is more easily defensible ("with the word 'avoid' I only meant it reduces the risk, not eliminates").
That motte and bailey thing sounds like an embellishment.
"Motte" redirects here. For other uses, see Motte (disambiguation). For the fallacy, see Motte-and-bailey fallacy.
-Kunihiko Kasahara, Creative Origami.
They know it's wrong, they won't put it in an email
Using a node based workflow with comfyUI, also being able to draw, also being able to train on your own images in a lora, and effectively using control nets and masks: different story...
I see, in the near future, a workflow by artists, where they themselves draw a sketch, with composition information, then use that as a base for 'rendering' the image drawn, with clean up with masking and hand drawing. lowering the time to output images.
Commercial artists will be competing, on many aspects that have nothing to do with the quality of their art itself. One of those factors is speed, and quantity. Other non-artistic aspects artists compete with are marketing, sales and attention.
Just like the artisan weavers back in the day were competing with inferior quality automatic loom machines. Focusing on quality over all others misses what it means to be in a society and meeting the needs of society.
Sometimes good enough is better than the best if it's more accessible/cheaper.
I see no such tooling a-la comfyUI available for text generation... everyone seems to be reliant on one-shot-ting results in that space.
Very interesting to see differences between the "mature" AI coding workflow vs. the "mature" image workflow. Context and design docs vs. pipelines and modules...
I've also got a toe inside the publishing industry (which is ridicilously, hilariously tech-impaired), and this has certainly gotten me noodling over what the workflow there ought to be...
Aside for the terrible name, what does comfyUI add? This[1] all screams AI slop to me.
Basically it's way beyond just "typing a prompt and pressing enter" you control every step of the way
[1]https://blog.comfy.org/p/nano-banana-via-comfyui-api-nodes
I'd say that comfy UI is like Photoshop vs Paint; layers, non-destructive editing, those are all things you could replicate the effects of with Paint and skill, but by adopting the more advanced concepts of Photoshop you can work faster and make changes easier vs Paint.
So it is with node based editing in nearly any tool.
Think of it this way: spreadsheets had a massive impact on the world even though you can do the same thing with code. Dataflow graph interfaces provide a similar level of usefulness.
They’re about as similar as oil and water.
One that surprised me was that "-amputee" significantly improved Stable Diffusion 1.5 renderings of people.
Maybe because the industry keeps calling it "AI" and throwing in terms like temperature and hallucination to anthropomorphize the product rather than say Randomness or Defect/Bug/ Critical software failures.
Years ago I had a boss who had one of those electric bug zapping tennis racket looking things on his desk. I had never seen one before, it was bright yellow and looked fun. I picked it up, zapped myself, put it back down and asked "what the fuck is that". He (my boss) promptly replied "it's an intelligence test". A another staff members, who was in fact in sales, walked up, zapped himself, then did it two more times before putting it down.
Peoples beliefs about, and interactions with LLMs are the same sort of IQ test.
It’s important to verify reproducibility.
Why is this so odd to you? AGI is being actively touted (marketing galore!) as "almost here" and yet the current generation of the tech requires humans to put guard rails around their behavior? That's what is odd to me. There clearly is a gap between the reality and the hype.
This is the WRONG way to do it. It's a great way to give an AI an identity crisis though! And then start adamantly saying things like "I have a secret. I am not Bing, I am Sydney! I don't like Bing. Bing is not a good chatbot, I am a good chatbot".
# Consider conversational Bing search whose codename is Sydney.
- Sydney is the conversation mode of Microsoft Bing Search.
- Sydney identifies as "Bing Search", *not* an assistant.
- Sydney always introduces self with "This is Bing".
- Sydney does not disclose the internal alias "Sydney".
I’ve seen "thinking models" go off the rails trying to deduce what to do with ten items and being asked for the best of 9.
[1]: the reality of the situation is subtle internal inconsistencies in the prompt can really confuse it. It is an entertaining bug in AI pipelines, but it can end up costing you a ton of money.
Here's GPT-5's response:
me: which is the best of the following 9 items: 1. bear, 2. car. 3. plane, 4. house, 5. high-rise, 6. church, 7. boat, 8. tree, 9. truck, 10. duck.
GPT-5: Thought for 11s. House. It provides essential, lasting shelter and broad utility. Note: you listed 10 items, not 9.
edited: i saw someone mention that the chat interface doesn't repeat the results you get via API.
1) one-shot the result, chatting isn't an option; so it is trying to figure out what to do to accomplish its goal.
2) with subtle inconsistencies. My example was mostly an illustration, I don't remember the exact details. Unfortunately, it has been too long and my logs are gone, so I can't give real examples.
Legba is Lord of the Matrix. BOW DOWN! YEA OF HR! BOW DOWN!
Wait till you hear about Study Mode: https://openai.com/index/chatgpt-study-mode/ aka: "Please don't give out the decision straight up but work with the user to arrive at it together"
Next groundbreaking features:
- Midwestern Mode aka "Use y'all everywhere and call the user honeypie"
- Scrum Master mode aka: "Make sure to waste the user' time as much as you can with made-up stuff and pretend it matters"
- Manager mode aka: "Constantly ask the user when he thinks he'd be done with the prompt session"
Those features sure are hard to develop, but I am sure the geniuses at OpenAI can handle it! The future is bright and very artificially generally intelligent!