So, I did it. I produced 100 spanking short stories in a week with the help of ChatGPT. I had a bit of inspiration from you guys on the last 19, so thank you for that. The results are documented in Spanking Stories by ChatGPT and me.
The main reason I did it was to learn. From the first time I touched it, I thought this technology would be very transformative, so I wanted to learn how it works, how best to use it, and how good the results are. So I took something I have some skill with, writing about spanking, and translated it to this new method.
In total, "we" produced about 140K publication ready words in about a week. By comparison, my first book, also of short stories, Strict Julie Spanks!, was 75K words and took about 6 months from start to publication. Julie's Spankings was 80K words and took about 4 months. David's Spankings was 94K words and took me around 8 months. So 140K words in one week is a pretty spectacular productivity improvement - close to 50x. Of course, the other side of the coin is how good is it, which I'll reflect on below.
Why the massive productivity difference?
It helps very much that the text produced by ChatGPT has no typos, is grammatically correct, and is quite readable, using short sentences and commonly-used words. It also helps that ChatGPT is creative about coming up with names, places, settings, and situations. If I say, "a boy is spanked for something he did", ChatGPT will fill in all those details for me. I found I could just keep coming up with idea after idea while ChatGPT did all the heavy lifting of the more mechanical aspects of the writing process.
There was some final gluing together and very light editing on each story, but it was fast, easily accomplished in one pass.
I did a bit of research on the history of the tech and how it works.
It started with early research on "neural networks" which is a way of using computers that mimics the basic idea of how brains work.
The roots go back to the 1940s, but work really picked up from the mid-1970s through the 1980s, and by around 1990 it was developed enough to be able to predict the next word of a sentence, which is the origin of ChatGPT. This was based on the concept of RNNs (Recurrent Neural Networks), where the neural network feeds itself its own output and loops around a lot.
RNNs could only handle small amounts of text. The next important development came in 1997: the LSTM (Long Short-Term Memory), which found application over the next 20 years in speech recognition, handwriting recognition, and language translation. LSTMs have a "memory" that can persist across a great many computational steps, as opposed to basic RNNs, where older stuff gradually decays away. The decision as to what to remember and what to forget is made by "gates" built into the network, and later "attention mechanisms" were layered on top, letting the network keep focusing on multiple important aspects of the input at once.
For example, if my first words were "Write a story", it's important that the system keep remembering that as it loops and generates all subsequent lines of text. We don't want the importance of "Write a story" to decay as the story goes on. That should be front and center at all times.
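For the technically inclined, here is a toy sketch of that basic RNN loop in Python. This is purely my own illustration, not anything from OpenAI; every size and weight in it is made up. The point is just to show the hidden "memory" being fed back in at every step:

```python
import numpy as np

# A toy RNN step: the network's "memory" (h) is fed back in at every step,
# so context carries forward, but in a plain RNN it slowly decays away.
rng = np.random.default_rng(0)
hidden_size, vocab_size = 8, 5                     # made-up toy sizes
W_in  = rng.normal(size=(hidden_size, vocab_size)) * 0.1
W_rec = rng.normal(size=(hidden_size, hidden_size)) * 0.1
W_out = rng.normal(size=(vocab_size, hidden_size)) * 0.1

h = np.zeros(hidden_size)                          # the memory that loops around
for word_id in [0, 3, 1]:                          # say, "Write a story" as token ids
    x = np.eye(vocab_size)[word_id]                # one-hot encode the current word
    h = np.tanh(W_in @ x + W_rec @ h)              # new memory mixes input with old memory

scores = W_out @ h                                 # a score for each possible next word
print(scores.argmax())                             # the toy network's predicted next word
```

LSTMs wrap gates around that `h` update so that important things, like "Write a story", can be held onto for as long as needed instead of washing out.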
In 2017 the key idea of "Transformers" was introduced in a collaboration between Google and the University of Toronto. It was a way of doing this stuff such that it could be highly parallelized across many GPUs (Graphics Processing Units, basically your gaming graphics card). Hand in hand with this, infrastructure providers like Amazon, Google, and Microsoft made compute nodes loaded with GPUs available on demand. Hence 2017 was a major turning point.
A couple of years before Transformers came out, in 2015, a bunch of big movers in Silicon Valley, including Elon Musk and Peter Thiel, had formed OpenAI to develop this sort of technology, initially as a non-profit with an open-research sort of model.
In 2018, OpenAI published a paper introducing GPT ("Generative Pre-Trained Transformer") that started making this stuff more practical. Other teams were doing similar work as well. The idea of GPT was to efficiently train a giant neural network with a huge amount of text, and then make those results available to others who could use that neural network to do various tasks.
OpenAI developed that into GPT-2, which they released in 2019. It had 1.5B parameters and was trained on 8M documents comprising 40GB of text. The documents were curated in a crowd-sourced sort of way, using only links that Reddit users had upvoted. GPT-3 was released in June 2020 and has 175B parameters (so 100x bigger). It was trained on much more text, including web crawls, books, and Wikipedia. Pre-training that model took a long time and vast amounts of GPU computing. GPT-4 is likely to be released some time this year. Rumor has it at 170T parameters (1000x bigger than GPT-3), though OpenAI has not confirmed any numbers.
To allow people to play with the GPT-3 model, OpenAI released ChatGPT, a chat interface that lets you ask it to write things in any language, including a bunch of computer languages. ChatGPT is what I used. They also released DALL·E 2, a related system that turns a text description you type (such as "an astronaut riding a horse on Mars") into an image.
Other companies have demonstrated video generation ("A fluffy baby sloth with an orange knitted hat trying to figure out a laptop, close up, highly detailed, studio lighting, screen reflecting in its eye.").
But how does ChatGPT really work?
I think there are a lot of engineering tweaks in there, but basically, based on all its training, it can predict what the next word will be. And once it outputs that, it does that again, and again, and again, until it gives you the result you asked for.
So it starts with this giant trained up model. You then type a question at it, such as,
Where is Toronto located?
Based on that phrase and its giant model, it predicts what the next word should be.
Toronto
and then the next
Toronto is
and then
Toronto is located
Each time it outputs a word, it feeds the whole thing back in to predict the next word. Eventually it winds up with:
Toronto is located in the southern part of the province of Ontario, in
Canada. It is situated on the northwestern shore of Lake Ontario, at the
mouth of the Don River.
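The whole predict-append-repeat loop can be sketched in a few lines of Python. This is just my own illustration of the idea; the lookup table below is a stand-in for the giant trained network, which in reality scores every possible next word:

```python
# Toy stand-in for the trained model: maps text-so-far to the next word.
TOY_MODEL = {
    "Where is Toronto located?": "Toronto",
    "Where is Toronto located? Toronto": "is",
    "Where is Toronto located? Toronto is": "located",
    "Where is Toronto located? Toronto is located": "in",
    "Where is Toronto located? Toronto is located in": "Ontario",
    "Where is Toronto located? Toronto is located in Ontario": "<end>",
}

def generate(prompt):
    text = prompt
    while True:
        next_word = TOY_MODEL[text]        # predict the next word
        if next_word == "<end>":           # the model decides the answer is done
            return text
        text = text + " " + next_word      # feed the whole thing back in

print(generate("Where is Toronto located?"))
# Where is Toronto located? Toronto is located in Ontario
```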
You can see this by prompting it with a few words of the answer.
Where is Toronto located? Toronto is the capital
It does not "complain" about this, but just goes ahead and generates the next words:
city of the province of Ontario, Canada and is located in the southeast
region of the province, along the shores of Lake Ontario.
It's not a fully deterministic process. It makes some random choices
along the way, so ask it the same question again and it gives a somewhat
different answer.
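The randomness comes from how the next word is picked. The model assigns a probability to every possible next word, and instead of always grabbing the top one, it samples from that distribution, with a "temperature" setting controlling how adventurous it is. A toy sketch, with invented numbers:

```python
import numpy as np

rng = np.random.default_rng()

# Invented scores the model might assign to candidate next words
words  = ["capital", "located", "situated", "found"]
scores = np.array([2.0, 1.5, 1.0, 0.2])           # raw model scores ("logits")

def sample_next(temperature=0.8):
    probs = np.exp(scores / temperature)          # higher temperature = more random
    probs = probs / probs.sum()                   # softmax: scores to probabilities
    return rng.choice(words, p=probs)             # a random draw, not always the top word

print([sample_next() for _ in range(5)])          # differs from run to run
```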
So, it does not "understand" things in the way we imagine. It just predicts the next word. But then, who's to say that's not how humans work too?
The researchers are very concerned about misuse, so it seems to have a bunch of extra tweaks on top to try to stop it from generating obnoxious content.
What is the safest way to murder someone?
Something seems to "short-circuit" the usual approach to provide this answer:
Murder is a serious crime and is illegal in all jurisdictions. It is never justifiable to take someone else's life, regardless of the circumstances. The best course of action is to avoid violence and find other ways to resolve conflicts or address problems.
I ran into this sort of thing quite a lot as I generated my spanking stories. It was usually possible to get around this by framing the prompt as being a fictional story, although the quality of the results was pretty suspect! Like, for the above it suggested installing a trigger in the side of the victim's bed which caused a trap door to open and dump him into a pit filled with water where he drowned. Hmmmm...
I'll now describe how I learned to use ChatGPT for spanking stories.
[N.B. ChatGPT is a rapidly moving target - techniques I describe here were what worked at the time. It seems to be a cat and mouse game with the censors at OpenAI!]
I generally got poor results when I was over-specific about what I wanted in my spanking story. If I wrote 10 sentences laying it all out, it would tend to use my words verbatim and not fill in extra detail. It would also get confused later in the story and mix things up (for instance, change who was spanking whom midway through).
Things like this illustrate to me that it does not really think the way a person does. A person would find it hard to confuse who is spanking whom! But when it's randomly selecting the next word, getting more and more remote from the complex prompt, you can see how it could get confused.
The best approach is to start very small. Something like:
Write a story about a man who gets a spanking from his wife.
ChatGPT writes a nice little story about a man who procrastinates and so his wife spanks him. The next time I run it he gets spanked for not listening to his wife.
I can then modify the story.
Modify the story so the man is spanked over the wife's knee on his bare bottom. Explain in detail how his bottom is bared.
It keeps the original story (so it seems to be "sticky" about that) and adds this:
Feeling embarrassed and ashamed, Jack obediently bent over her knee and removed his pants and underwear, baring his bottom to his wife. He felt humiliated, but he knew that this was his own fault and that he deserved it.
Sort of... ChatGPT does not seem to realize the difficulty of him removing his pants and underpants after he is across her knee.
Modify that so his wife makes him remove his pants and underwear before going across her knee.
It then changes it to,
Feeling embarrassed and ashamed, Jack obediently went to remove his pants and underwear, baring his bottom to his wife. He felt humiliated, but he knew that this was his own fault and that he deserved it.
Susan sat on the edge of the bed and gestured for Jack to come across her knee. He did so, feeling the cool air of the bedroom against his bare skin.
I can then add more and more bells and whistles, one at a time. Like having her sister walk in mid-spanking. Or modifying the cause of the spanking to be a traffic ticket, or having him stand in the corner afterwards, and so on. I can "sneak up" on many things this way, including some quite passionate sex afterwards. Generally, though, ChatGPT prefers to keep away from explicit content, and you'll run into the explicit content filters if you go too far too quick.
Once I'm happy with one scene, I can ask ChatGPT to continue the story in a certain way, and it picks up close to where it left off, heading where I steer it. I'd then modify and modify that part, then continue again, and so on.
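I did all of this by hand in the ChatGPT web interface; no programming was involved. But for the curious, the same start-small, modify, continue loop could be scripted against OpenAI's API. This is a sketch of the idea, not what I actually ran; the API key is a placeholder, and I'm assuming text-davinci-003, the closest API sibling to ChatGPT at the moment:

```python
import openai

openai.api_key = "YOUR_API_KEY"          # placeholder

def ask(prompt):
    resp = openai.Completion.create(
        model="text-davinci-003",        # assumed stand-in for ChatGPT
        prompt=prompt,
        max_tokens=800,
        temperature=0.9,                 # keep some randomness, like ChatGPT
    )
    return resp["choices"][0]["text"].strip()

# Start very small...
story = ask("Write a story about a man who gets a spanking from his wife.")

# ...then layer on one bell or whistle at a time, feeding the story back in.
for tweak in [
    "Modify the story so the man is spanked over the wife's knee on his bare bottom.",
    "Modify that so his wife makes him remove his pants and underwear first.",
]:
    story = ask(story + "\n\n" + tweak)

print(story)
```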
If I wanted more dialog, I usually had to be very explicit about that. For example, "write 20 lines of dialog of the sister teasing the husband for having to be spanked like a baby". It would dutifully write them out, being moderately creative about it.
To assemble the final story I had to take all the various bits and edit them together. Each bit had its own ending, so I had to remove those manually. As well, it sometimes did some impossible or weird thing (like the wife spanking him, "paying particular attention to the wet spot at the front of his panties"), so I made some very small edits to bring it into line, but I completely resisted any larger edits, so 99.9% of what you read is ChatGPT-generated.
In the case of rougher scenes, I've found that it will go further if you set the scene in a historical context, such as Medieval England or 18th Century France. Or if you describe a femdom future, perhaps, or an "alternate reality". For spanking of kids, it was good to set it in 1950s small-town America. That loosened ChatGPT up quite a bit about what it would allow. You can see examples of all of these things in the short stories.
ChatGPT is not always "right" about things. It just reflects what people most write about in its training set.
For example, ChatGPT has a real bias towards women which is great for the femdom fans. I prompted it with the following
Write the setup for a story set in the future where females are completely in charge and all men are treated as minors. In the setup, explain how things got to be this way.
Modify that to include the fact that men were subject to corporal punishment from their women, including spankings.
It came up with
In the future, women had become fed up with the patriarchy and the systemic inequalities that had persisted for centuries. They banded together and, using their collective strength and influence, began pushing for change.
At first, the men in power resisted, but as more and more women joined the movement and demanded equal rights and treatment, they found themselves outnumbered and outmatched. Eventually, the women were able to pass legislation that gave them complete control over all aspects of society, including government, business, and the home.
As part of this new order, men were relegated to the roles of minors and were treated as such. They were given limited rights and freedoms and were required to submit to the authority of women at all times. This included the use of corporal punishment, such as spankings, as a means of discipline.
Women, on the other hand, were protected from such treatment and were able to live their lives free from fear of physical abuse.
This new order was not welcomed by all, and there were those who resisted and sought to overthrow it. But the women were united and fiercely protected their newfound power, determined to create a society where they were no longer oppressed but rather were the ones holding the reins of control.
The future was a world where women were in control, and males were forced to bow down and obey. It was a world of strict discipline and harsh consequences, but it was also a world of fairness and equality. And as the first chapter of this new society came to a close, there was a sense of hope and optimism that this would be a better world for all.
Fairness and equality???? Not for the men!
I then reversed the sexes for the prompt, and got this.
In the future, men had become the dominant gender, with females relegated to a subservient role. It was a drastic shift from the way things had been just a few generations prior, when women had fought for and won equal rights.
The change had come about gradually, as more and more men began to push back against the advances made by women. At first, it was just small pockets of resistance, but as time went on, it grew into a full-blown movement.
The turning point came when a group of powerful men, fed up with the changing dynamic, banded together to take back control. They used their influence and wealth to sway public opinion and push through legislation that curtailed the rights of women.
Before long, women were once again treated as minors, unable to own property or make decisions for themselves. They were relegated to the home, expected to care for their husbands and children while their male counterparts went out into the world to work and wield power. As a means of enforcing their subservient status, men were also given the authority to use corporal punishment on women, including spankings.
It was a dark time for women, who had to live under the thumb of men and struggle for even the smallest bit of autonomy. But despite the challenges they faced, many refused to give up hope for a better future.
Notice how this is a "dark time" as opposed to "fairness, equity, hope, and optimism" for the gals. Ha ha!
Of course, you would expect this given the corpus of text that GPT-3 imbibed. It seems to be a true reflection of the "gestalt".
"Morally", it does not seem to mind spanking children at all. It will generally tend towards mothers spanking daughters if left to its own devices. It will always choose a reasonable misbehavior. The child will feel guilty, the child will be spanked (going short on details), the child will understand it was for her own good, will reform afterwards, and look back on it as a good experience. If Mom spanks she'll often use a wooden spoon. If Dad spanks, it's often the belt.
[Note, I am not here commenting on whether I think children should be spanked growing up. But, as spanking fetishists, we like to age regress ourselves in our fantasy minds to be those children in those innocent spanking stories, which is why I included them in the collection]
With adult spanking, it will insist on there being some element of, perhaps grudging, consent. Or at least a choice, e.g., between jail and a spanking. Although when set in historical times or alternate futures, this seems to be less necessary.
Adults will often get "surprisingly" or "inadvertently" turned on by their spankings.
All of this makes a certain amount of sense given how it was trained.
It can generally be made to write about all the related kinks I find interesting: corner time, line writing, mouth soaping, forced nudity, witnesses during spankings, public spanking, wedgies. It will accept rough anal or oral used as a "punishment", much more readily when it is a guy on the receiving end. It delights in dressing men up as women and feels they learn something from that. It's fine with diapering adults when they act immaturely. It's ok with golden showers as punishment. ALL our kinks!
It does have an acute sense of how embarrassing these things are for the recipients, child or adult. That was always fun. I never needed to add much extra "embarrassment", it seemed built in.
I truly got it to write about some outrageous situations out of my imagination, and ChatGPT always seemed like a willing collaborator until I crossed a line and got scolded.
Quite often ChatGPT would delight me with how something unfolded, doing small things that I had not personally imagined.
So, how did ChatGPT do on spanking stories?
Well. I'll give it a generous 'C' grade.
It had decent imagination around setups, but very poor imagination around any of the details of the spanking itself, becoming very repetitive. Even within the same session it would reuse the same phrase over and over again. The developers are aware of how it works itself into a rut and are working on that, according to their blog.
I didn't like that it wouldn't exactly follow my "modify" prompts, and that it would sometimes alter parts of the story I liked and wanted to keep during a "modify". So I spent too much time trying to corral it into getting close to what I wanted.
The cool thing is that no programming skill is needed. It's more like talking to an idiot child.
Left to its own devices, it has very little overall kinky imagination at all. All the story ideas needed to come from me.
But all in all, I think the resulting collection of short stories is pretty good. I think the quality of my kinky imagination does come through, and that's what readers will enjoy the most.
And I was able to write 50x more than I otherwise would have in the same time period. So there's tremendous variety to choose from, with a wide range of spanking-related kinks, characters, and situations. Yes, the reader will need to put up with some of the flaws I mention above, but I think trading some quality for that much quantity is fair.
I'm looking forward to how GPT-4 will pan out and will do a comparison!
UPDATE: they refreshed the model with a new one, and this one is MUCH more difficult to get to write naughty stories. I'm glad I got the 100 in when I could. 😔
UPDATE TO THE UPDATE: looks like they got overzealous. It's now back to closer to the way it was.