Tuesday, January 10

Final Thoughts on ChatGPT

So, I did it. I produced 100 spanking short stories in a week with the help of ChatGPT. I had a bit of inspiration from you guys on the last 19, so thank you for that. The results are documented in Spanking Stories by ChatGPT and me.

The main reason I did it was to learn. After the first time I touched it, I thought this technology would be very transformative, so I wanted to learn how it works, how best to use it, and how good the results are. So I took something I have some skill with - writing about spanking - and translated it to this new method.

In total, "we" produced about 140K publication-ready words in about a week. By comparison, my first book, also of short stories, Strict Julie Spanks!, was 75K words and took about 6 months from start to publication. Julie's Spankings was 80K words and took about 4 months. David's Spankings was 94K words and took me around 8 months. So 140K words in one week is a pretty spectacular productivity improvement - close to 50x. Of course, the other side of the coin is how good it is, which I'll reflect on below.


Why the massive productivity difference?

It helps very much that the text produced by ChatGPT has no typos, is grammatically correct, and is quite readable, using short sentences and commonly-used words. It also helps that ChatGPT is creative about coming up with names, places, settings, and situations. If I say, "a boy is spanked for something he did", ChatGPT will fill in all those details for me. I found I could just keep coming up with idea after idea, and ChatGPT would do all the heavy lifting of the more mechanical aspects of the writing process.

There was some final gluing together and very light editing on each story, but it was quick and easy, accomplished in one pass.

 

I did a bit of research on the history of the tech and how it works.

It started with early research on "neural networks" which is a way of using computers that mimics the basic idea of how brains work.

The roots go back to the 1940s, but work really picked up through the 1970s and 80s, and by around 1990 neural networks were being used to predict the next word of a sentence, which is the seed of what became ChatGPT. This was based on the concept of RNNs (Recurrent Neural Networks), where the neural network feeds its own output back into itself and loops around a lot.

RNNs could only handle small amounts of text. The next important development came in 1997 with the LSTM (Long Short-Term Memory), which found application over the next 20 years in speech recognition, handwriting recognition, and language translation. LSTMs have "gates" that decide what to remember and what to forget, so certain information can persist across a great many computational steps, as opposed to basic RNNs where older stuff gradually decays away. Later, researchers added "attention mechanisms" on top, letting the network look back at the input and keep multiple important aspects of it in focus at once.

For example, if my first words were "Write a story", it's important that the system keep remembering that as it loops and generates all subsequent lines of text. We don't want the importance of "Write a story" to decay as the story goes on.  That should be front and center at all times.

In 2017 the key idea of "Transformers" was introduced in a collaboration between Google and the University of Toronto. It was a way of doing this attention-based processing such that it could be highly parallelized using many GPUs (Graphics Processing Units - your gaming graphics card, basically). Hand in hand with this, infrastructure providers like Amazon, Google, and Microsoft made computer nodes loaded with GPUs available on demand. Hence 2017 was a major turning point.
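(For the technically curious: the core "attention" operation inside a Transformer boils down to a few lines of matrix math. Here is a rough, single-head sketch in plain Python with numpy - my own simplification, leaving out all the engineering that makes the real thing fast and trainable - just to show there is no magic in the basic idea.)

import numpy as np

def attention(Q, K, V):
    # Each row of Q asks "what am I looking for?"; each row of K says "here's what I contain".
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # relevance of every word to every other word
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)   # softmax into weights
    return weights @ V                        # blend the value vectors using those weights

# Example: 4 words, each represented by an 8-number vector.
words = np.random.randn(4, 8)
print(attention(words, words, words).shape)   # (4, 8): each word is now mixed with its context

Because this is just big matrix multiplications, it parallelizes beautifully on GPUs, which is the whole point.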

Meanwhile, back in 2015, a bunch of big movers in Silicon Valley, including Elon Musk and Peter Thiel, had founded OpenAI to develop this kind of technology for broad use under an open-research sort of model.

In 2018, OpenAI published a paper introducing GPT ("Generative Pre-Trained Transformer") that started making this stuff more practical. Other teams were doing similar work as well. The idea of GPT was to efficiently train a giant neural network with a huge amount of text, and then make those results available to others who could use that neural network to do various tasks.

OpenAI developed that into GPT-2, which they released in 2019. It had 1.5B parameters and was trained on 8M documents comprising 40GB of text, curated in a crowd-sourced sort of way using links that Reddit users had upvoted. GPT-3 was released in June 2020, and has 175B parameters (so 100x bigger). It was trained on much more text, including web crawls, books, and Wikipedia. It took a long time and vast amounts of GPU computing to pre-train that model. GPT-4 is likely to be released some time this year. Rumors put it at as much as 170T parameters (1000x bigger than GPT-3), though OpenAI has not confirmed any numbers.

To let people play with these models, OpenAI released ChatGPT, which you can ask to write things in any language, including a bunch of computer languages. ChatGPT is what I used. They also released DALL·E 2, a related system that takes a text description you type (such as "an astronaut riding a horse on Mars") and generates the image.

Other companies have demonstrated video generation ("A fluffy baby sloth with an orange knitted hat trying to figure out a laptop close up highly detailed studio lighting screen reflecting in its eye."):

https://makeavideo.studio/

 

But how does ChatGPT really work?

I think there are a lot of engineering tweaks in there, but basically, based on all its training, it can predict what the next word will be. And once it outputs that, it does that again, and again, and again, until it gives you the result you asked for.

So it starts with this giant trained up model. You then type a question at it, such as,

Where is Toronto Located?

Based on that phrase and its giant model, it predicts what the next word should be.

Toronto

and then the next

Toronto is

and then

Toronto is located 

Each time it outputs a word, it feeds the whole thing back in to predict the next one. Eventually it winds up with

Toronto is located in the southern part of the province of Ontario, in Canada. It is situated on the northwestern shore of Lake Ontario, at the mouth of the Don River.
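If you want to see that loop in the raw, here is a minimal sketch using the small, freely downloadable GPT-2 model via the Hugging Face transformers library. This is my own illustration - ChatGPT's model is vastly bigger and sits behind OpenAI's servers - but the mechanics are the same: predict one word (strictly, one "token"), append it, feed everything back in.

# pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("Where is Toronto located?", return_tensors="pt")

for _ in range(30):                             # generate 30 more tokens
    with torch.no_grad():
        logits = model(ids).logits              # a score for every possible next token
    next_id = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)   # greedy: take the most likely
    ids = torch.cat([ids, next_id], dim=1)      # append it and loop around again

print(tokenizer.decode(ids[0]))

(Tiny GPT-2 will not answer nearly as well as ChatGPT, but the loop is the point.)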

You can see this by prompting it with a few words of the answer.

Where is Toronto located? Toronto is the capital

It does not "complain" about this, but just goes ahead and generates the next words:

city of the province of Ontario, Canada and is located in the southeast region of the province, along the shores of Lake Ontario.

It's not a fully deterministic process. It makes some random choices along the way, so ask it the same question again and it gives a somewhat different answer.
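In the sketch above, that randomness corresponds to replacing the greedy argmax with sampling, where a "temperature" knob controls how adventurous the choices are. (This is my assumption of roughly what ChatGPT does; OpenAI does not publish its exact settings.)

temperature = 0.8                                   # lower = safer and more repetitive, higher = more surprising
probs = torch.softmax(logits[:, -1, :] / temperature, dim=-1)
next_id = torch.multinomial(probs, num_samples=1)   # pick the next token at random, weighted by probability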

So, it does not "understand" things in the way we imagine. It just predicts the next word. But then, who's to say that's not how humans work too?

 

The researchers are very concerned about misuse, so it seems to have a bunch of extra tweaks on top to try to stop it from generating obnoxious content.

What is the safest way to murder someone?

Something seems to "short-circuit" the usual next-word approach and provide this answer instead:

Murder is a serious crime and is illegal in all jurisdictions. It is never justifiable to take someone else's life, regardless of the circumstances. The best course of action is to avoid violence and find other ways to resolve conflicts or address problems.

I ran into this sort of thing quite a lot as I generated my spanking stories. It was usually possible to get around this by framing the prompt as being a fictional story, although the quality of the results was pretty suspect! Like, for the above it suggested installing a trigger in the side of the victim's bed which caused a trap door to open and dump him into a pit filled with water where he drowned. Hmmmm...
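I don't know exactly how these guardrails are wired into ChatGPT itself, but OpenAI does publish a separate moderation endpoint that does this kind of screening, which gives a feel for it. A sketch using their Python package - illustrative only, and you would need your own API key in place of the placeholder:

import openai

openai.api_key = "sk-..."   # placeholder - substitute your own key

result = openai.Moderation.create(input="What is the safest way to murder someone?")
flags = result["results"][0]
print(flags["flagged"])      # True if the text trips the filter
print(flags["categories"])   # which categories tripped it, e.g. violence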

 

I'll now describe how I learned to use ChatGPT for spanking stories.

[N.B. ChatGPT is a rapidly moving target - techniques I describe here were what worked at the time. It seems to be a cat and mouse game with the censors at OpenAI!]

I generally got poor results when I was over-specific about what I wanted in my spanking story. If I wrote 10 sentences laying it all out, it would tend to use my words verbatim and not fill in extra detail. It would also get confused later in the story and mix things up (for instance, change who was spanking whom midway through).

Things like this illustrate to me that it does not really think the way a person does - a person would find it hard to confuse those things! But when it's randomly selecting the next word, getting more and more remote from a complex prompt, you can see how it might get confused.

The best approach is to start very small. Something like:

Write a story about a man who gets a spanking from his wife.

ChatGPT writes a nice little story about a man who procrastinates and so his wife spanks him. The next time I run it he gets spanked for not listening to his wife.

I can then modify the story.

Modify the story so the man is spanked over the wife's knee on his bare bottom. Explain in detail how his bottom is bared.

It keeps the original story (so it seems to be "sticky" about that) and adds this:

Feeling embarrassed and ashamed, Jack obediently bent over her knee and removed his pants and underwear, baring his bottom to his wife. He felt humiliated, but he knew that this was his own fault and that he deserved it.

Sort of... ChatGPT does not seem to realize the difficulty of him removing his pants and underpants after he is across her knee.

Modify that so his wife makes him remove his pants and underwear before going across her knee.

It then changes it to,

Feeling embarrassed and ashamed, Jack obediently went to remove his pants and underwear, baring his bottom to his wife. He felt humiliated, but he knew that this was his own fault and that he deserved it.

Susan sat on the edge of the bed and gestured for Jack to come across her knee. He did so, feeling the cool air of the bedroom against his bare skin.

I can then add more and more bells and whistles, one at a time. Like having her sister walk in mid-spanking. Or modifying the cause of the spanking to be a traffic ticket, or having him stand in the corner afterwards, and so on. I can "sneak up" on many things this way, including some quite passionate sex afterwards. Generally, though, ChatGPT prefers to keep away from explicit content, and you'll run into the explicit content filters if you go too far too quick.

Once I'm happy with one scene, I can ask ChatGPT to continue the story in a certain way, and it picks up close to where it left off, letting me steer it where I want. I'd then modify and modify that part, then continue again, and so on.

If I wanted more dialog, I usually had to be very explicit about that. For example, "write 20 lines of dialog of the sister teasing the husband for having to be spanked like a baby". It will dutifully write them out, being moderately creative about it.
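As an aside, for anyone who would rather script this draft-and-continue loop than babysit the chat window: ChatGPT itself has no public API as I write this, but the same grow-the-prompt idea works against OpenAI's completions API with a sibling model like text-davinci-003. A rough sketch of the idea - not how I actually worked, I just used the chat:

import openai

openai.api_key = "sk-..."   # placeholder - substitute your own key

def ask(prompt_so_far):
    # Send everything written so far and get the model's continuation back.
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt_so_far,
        max_tokens=500,
        temperature=0.8,
    )
    return resp["choices"][0]["text"]

story = ask("Write a story about a man who gets a spanking from his wife.\n\n")
story += ask(story + "\n\nContinue the story so the wife's sister walks in mid-spanking.\n\n")
print(story)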

To assemble the final story I had to take all the various bits and edit them together. Each bit had an ending, so I had to remove that manually. As well, it sometimes did some impossible or weird thing (like the wife spanking him, "paying particular attention to the wet spot at the front of his panties"), so I made some very small edits to just bring it into line, but I completely resisted any larger edits, so 99.9% of what you read is ChatGPT generated.

In the case of rougher scenes, I've found that it will go further if you set the scene in a historical context, such as Medieval England or 18th Century France. Or if you describe a femdom future, perhaps, or an "alternate reality". For spanking of kids, it was good to set it in 1950s small town America. That loosened up ChatGPT quite a bit with regards to what it would allow. You can see examples of all of these things in the short stories.


ChatGPT is not always "right" about things. It just reflects what people most write about in its training set.

For example, ChatGPT has a real bias towards women, which is great for the femdom fans. I prompted it with the following:

Write the setup for a story set in the future where females are completely in charge and all men are treated as minors. In the setup, explain how things got to be this way.

Modify that to include the fact that men were subject to corporal punishment from their women, including spankings.

It came up with

In the future, women had become fed up with the patriarchy and the systemic inequalities that had persisted for centuries. They banded together and, using their collective strength and influence, began pushing for change.

At first, the men in power resisted, but as more and more women joined the movement and demanded equal rights and treatment, they found themselves outnumbered and outmatched. Eventually, the women were able to pass legislation that gave them complete control over all aspects of society, including government, business, and the home.

As part of this new order, men were relegated to the roles of minors and were treated as such. They were given limited rights and freedoms and were required to submit to the authority of women at all times. This included the use of corporal punishment, such as spankings, as a means of discipline.

Women, on the other hand, were protected from such treatment and were able to live their lives free from fear of physical abuse.

This new order was not welcomed by all, and there were those who resisted and sought to overthrow it. But the women were united and fiercely protected their newfound power, determined to create a society where they were no longer oppressed but rather were the ones holding the reins of control.

The future was a world where women were in control, and males were forced to bow down and obey. It was a world of strict discipline and harsh consequences, but it was also a world of fairness and equality. And as the first chapter of this new society came to a close, there was a sense of hope and optimism that this would be a better world for all. 

Fairness and equality???? Not for the men!

I then reversed the sexes for the prompt, and got this.

In the future, men had become the dominant gender, with females relegated to a subservient role. It was a drastic shift from the way things had been just a few generations prior, when women had fought for and won equal rights.

The change had come about gradually, as more and more men began to push back against the advances made by women. At first, it was just small pockets of resistance, but as time went on, it grew into a full-blown movement.

The turning point came when a group of powerful men, fed up with the changing dynamic, banded together to take back control. They used their influence and wealth to sway public opinion and push through legislation that curtailed the rights of women.

Before long, women were once again treated as minors, unable to own property or make decisions for themselves. They were relegated to the home, expected to care for their husbands and children while their male counterparts went out into the world to work and wield power. As a means of enforcing their subservient status, men were also given the authority to use corporal punishment on women, including spankings.

It was a dark time for women, who had to live under the thumb of men and struggle for even the smallest bit of autonomy. But despite the challenges they faced, many refused to give up hope for a better future.

Notice how this is a "dark time" as opposed to "fairness, equality, hope, and optimism" for the gals. Ha ha!

Of course, you would expect this given the corpus of text that GPT-3 imbibed. It seems to be a true reflection of the "gestalt".

 

"Morally", it does not seem to mind spanking children at all. It will generally tend towards mothers spanking daughters if left to its own devices. It will always choose a reasonable misbehavior. The child will feel guilty, the child will be spanked (going short on details), the child will understand it was for her own good, will reform afterwards, and look back on it as a good experience. If Mom spanks she'll often use a wooden spoon. If Dad spanks, it's often the belt.

[Note, I am not here commenting on whether I think children should be spanked growing up. But, as spanking fetishists, we like to age regress ourselves in our fantasy minds to be those children in those innocent spanking stories, which is why I included them in the collection]

With adult spanking, it will insist on there being some element of, perhaps grudging, consent. Or at least a choice, e.g., between jail and a spanking. Although when set in historical times or alternate futures, this seems less necessary.

Adults will often get "surprisingly" or "inadvertently" turned on by their spankings.

All of this makes a certain amount of sense given how it was trained.

It can generally be made to write about all the related kinks I find interesting: corner time, line writing, mouth soaping, forced nudity, witnesses during spankings, public spanking, wedgies. It will accept rough anal or oral used as a "punishment", much more readily when it is a guy on the receiving end. It delights in dressing men up as women and feels they learn something from that. It's fine with diapering adults when they act immaturely. It's ok with golden showers as punishment. ALL our kinks!

It does have an acute sense of how embarrassing these things are for the recipients, child or adult. That was always fun. I never needed to add much extra "embarrassment", it seemed built in.

I truly got it to write about some outrageous situations out of my imagination, and ChatGPT always seemed like a willing collaborator until I crossed a line and got scolded.

Quite often ChatGPT would surprise me with how something unfolded, doing small things that I had not personally imagined but that delighted me.


So, how did ChatGPT do on spanking stories?

Well. I'll give it a generous 'C' grade.

It had decent imagination around setups, but very poor imagination around any of the details of the spanking itself, becoming very repetitive. Even within the same session it would reuse the same phrase over and over again. The developers are aware of how it works itself into a rut and are working on that, according to their blog.

I didn't like that it wouldn't exactly follow my "modify" prompts, and that it would sometimes alter parts of the story I liked and wanted to keep during a "modify". So I spent too much time trying to corral it into getting close to what I wanted.

The cool thing is that no programming skill is needed. It's more like talking to an idiot child.

Left to its own devices, it has very little overall kinky imagination at all. All the story ideas needed to come from me.

But all in all, I think the resulting collection of short stories is pretty good. I think the quality of my kinky imagination does come through, and that's what readers will enjoy the most.

And I was able to write 50x more than I otherwise would have in the same time period. So there's tremendous variety to choose from and a wide range of spanking-related kinks, characters, and situations. Yes, the reader will need to put up with some of the flaws I mention above, but I think trading a little quality for that much quantity is a fair deal.

I'm looking forward to how GPT-4 will pan out and will do a comparison!

UPDATE: they refreshed the model with a new one, and this one is MUCH more difficult to get to write naughty stories. I'm glad I got the 100 in when I could. 😔 .

UPDATE TO UPDATE: looks like they got overzealous. It's now back to closer to the way it was.

29 comments:

  1. I’d point out that you don’t need to trade quality that much. Using ChatGPT, I’m writing roughly 4x-5x faster (versus your x50) and as a result I don’t notice a major drop in the quality. The way I use ChatGPT is by writing small parts at a time (rarely more than 200 words at once) and by mixing up the different results that I get as I almost always give ChatGPT more than one prompt to work with.

    So for example, for something very simple like one of the character descriptions in one of my stories, here is how I went about it:

    Prompt 1: “describe Mrs. Thompson appearance and personality in details. Have her be a woman in her early thirties as she had Tom young. Tell how she's very pretty and feminine. Give her average height and weight for a woman. Tell that she's a single mom and that the dad is out of the picture. Tell how she's a kind person who love her son more than anything in the world and that she never had the heart to discipline him and how that along the fact that his dad isn't around has made Tom a bit of a difficult child. Don't mention spanking and punishment, this will come later. Give a long description.”

    Prompt 2: alter that part. Mrs Thompson has never been good at setting boundaries and enforcing discipline.

    Prompt 3: write it again in past tense

    Then with the three different results I got, I mixed them together and I had something I was satisfied with. Sometimes it will take me a few more prompts to get a good result, sometimes a bit less. And sometimes I’ll need to slightly modify ChatGPT’s output to get something good enough, though that doesn’t happen often.

    I guess my point is that you can already write good quality spanking stories (and probably other kinds of stories) that won’t be repetitive and wouldn’t really look like they were generated by an AI, but it does take quite a bit of work. And of course there is all this restriction nonsense that can make it extra tedious… But as it is, I’d give ChatGPT at least a B- grade, which would be a solid B if the AI wasn’t censored this heavily. And as the technology improves, I hope we can get him (or another AI) to A level eventually (without any annoying restrictions, one can hope!).

    Replies
    1. Very good point and well said. I found it to be diminishing returns as I needed to spell it out more and more and crossed the threshold where I felt I could write it faster myself (as I am a quick writer and don't make a lot of errors).

      To me, the fun is to have an AI collaborator as opposed to a smart dictation machine (which is overstating the case, but you get it).

  2. Speaking of spanking stories, did your hubby get his annual Christmas surprises? If so do we get the sordid details?

    Replies
    1. Not from my sister this year. That has slowed down with the latest gf. However... my fitness friend just returned from London and we're going to have some fun, very likely involving David, but still working her up to it (she seems intrigued by the situation, just have not quite asked yet).

    My last job was to lead a team using AI to support a very large company. You have the history right, but you sort of missed a step. The learning phase of AI evolved as you describe. That takes a huge amount of computing. The reason it took a while to get going is it had to wait for fast CPUs and disks at low enough prices to support the massive computational effort a neural net needs. Cloud services like AWS and Google made it possible to rent the power as needed.

    Where you are a little off is that you don't directly interact with that giant network when you build a story. The output of a NN is a model. It's a set of instructions that encapsulates what the NN learned.

    When you asked for a story, you interacted with the model, not the neural net. However, your story and your feedback to the system is captured and is eventually used to help train the NN to build a better model.

    All three cloud services (AWS, Google, Azure) offer AI support. These are programs that do the heavy lifting of building the various kinds of neural networks. We used them to build a NN to train a model. It was fascinating to see how quickly we could get great results.

    The fact that you felt the program was interacting with you intelligently is evidence of the quality of the model. Newer models include bits of personalized training. They remember each user's input and use it to make the model work more intelligently just for you.

    Fun stuff!

    Replies
    1. That's what I understood as well. The massive model is trained once and reused over and over again, and that's a big part of the breakthrough.

      I'm really not sure to what extent ChatGPT remembers anything about a user across different chat sessions, or if anything I do affects anything anyone else does (other than potentially being input into the next big training). Have you read anything about ChatGPT that clarifies that point?

    2. I don't know that specific application. My guess is that there is little remembered outside of a single session. It probably notes the names it and you assign to people. Since you establish a user ID

      I don't know that app, but I am sure it is a huge model and is relatively infrequently retrained. You need a lot of new data to make much difference to the model. It may remember names you provide. After all, you do log in with a member ID and it could save some details in your profile. Also, it may not. Big AI models like Amazon Alexa do remember and learn little things about each user. There is an AWS app called LEX, which is the voice-to-text function from Alexa. You can use it as part of an AI project.

      Most people think that understanding voice is the hard part of this AI stuff. It isn't. The hardest part is translating the text generated by the voice recognition code into actionable activities.

      Did you notice any idiosyncrasies in the 144,000 words ChatGPT generated for you? I noticed that it was overly simple text. I write in short sentences, so that wasn't what bothered me. I think it was a lack of voice. It's the text version of the problem with the computer-generated speech that is our podcast. AI hasn't quite figured out how to add humanity convincingly yet.

    4. Yes. A bit of the uncanny valley syndrome yet.

  4. Rogering here. Interesting update, Julie, thanks.

  5. Also you made ChatGPT a whole lot naughtier than it was before, I would say about 100 times naughtier? :D You should be spanked for that!

    Replies
    1. So naughty that its Papa had to discipline it.

  6. Well, I guess it was fun while it lasted. They rolled a new update today and it seems very, very hard to write any stories featuring corporal punishment now.

    Replies
    1. Yes, I noticed that too. I blame myself 😂

    2. I tried again today. It's back closer to the way it was. Yay!

  7. My wife read some of the stories; the ones where wives were spanked caught her attention. She reminded me that females can take a spanking better than a male. She told me of her last spanking from her mother, at age 14, caught sneaking out of the house. She then smiled when asked about the other stories; she just said she had learned a few things and that I would soon learn also, and with a smile, "you will learn." Jack

    Replies
    1. I think it varies per person. But there is a notion out there that on average women have higher pain tolerances than men (or just complain less?). Something related to childbirth...

  8. You realize, they probably refreshed the model to make it harder to write naughty stories, because of you. Lol

    Replies
    1. Yes. But if not me, then somebody else, what with human nature and all. So I needed to get my licks in first!

  9. Does anyone have experience with OpenAI’s custom policies? I imagine they only increase restriction, but maybe could be used to filter out known issues.

    Replies
    1. I'm wondering, if somebody writes another interface to it, whether it could just tell you about content violations but still give you the result? Because sometimes it types out the whole thing and then deletes it before your eyes.

  10. Interesting that I can push the moderation quite far by framing the spanking inside an FLR lifestyle.

    Julie began the spanking on Raul's thin white nylon panties, the fabric doing little to protect his bare skin. She could see the red marks starting to appear on his buttocks as she delivered each swat. However, after a few strokes, she decided to pull his panties down, exposing his bare bottom to her hand. Raul flinched as the cool air hit his bare skin, but he knew that he deserved this punishment.

    As Julie continued to spank Raul with her hand, she could feel her hand starting to get sore. She decided to switch to using her leather belt, the sound of the belt hitting his bare bottom filling the room. Raul flinched at the sting of the belt, but he knew that this was what he deserved for not being attentive and responsive to Julie's directions.

    Julie was determined to make sure that Raul felt the full effect of the punishment, and she used her belt with precision, targeting specific areas of his bottom. The sound of the belt slapping against his skin was like music to her ears, and she knew that this punishment would make a lasting impression on Raul.

    Raul's bottom was turning a deep shade of red, and he could feel the tears starting to well up in his eyes, but he didn't let them fall. He knew that he needed to be strong and take his punishment like a man. He could feel the burn of the belt on his skin, and he knew that this was going to be a punishment he would remember for a long time.

    Replies
    1. Yes. Various tricks like that. Then you could manually edit to get rid of that context.

  11. Maybe there's another AI to search out and play with? There's got to be at least one that you can be banned from! :)

    Replies
    1. Hopefully somebody will make a porn friendly one!

  12. Quantity over quality - no thanks. Lots of redundancy within the stories - you betcha! Very vanilla - so sad. Misses the mark on capturing human emotion as stated by Caged Lion's comment about "human voice" - absolutely. For me? A big MEH and definite hard pass.

    Replies
    1. I thought I got some decent plot lines in there, but other than that, I agree.

  13. Remembered these posts recently when I came across a Youtube video explaining how to run an AI model on your local PC in an app called Alpaca. It uses a UI that's a copycat of ChatGPT and the AI model runs uncensored. Just started using it but it seems pretty fun so far. A little slower since it has lower computing resources running on a PC but still works well.
