All That’s Unpleasant Does Not Punish

I’ve written a lot about the behavior science definitions of reinforcement and punishment. That’s because they can trip us up so easily. Something can be attractive, but not always reinforce behavior. Something can be unpleasant, but not serve to decrease behavior even when it looks like it should. This story is about a natural consequence that seemed like it would decrease behavior but didn’t.

This garden fence, made of PVC and chicken wire, proved to be an effective barrier for a certain beagle.

Daisy was frail for a beagle. She was already infested with heartworms when she came to me years ago. The heartworm treatment back then was brutal. She survived it under my care and was cured, but she was never strong afterward.

At the time, I didn’t think she was very smart. But now I know I didn’t have the knowledge to determine that. It’s amazing how “smart” our dogs suddenly get when we start using positive reinforcement. She never had the chance.

But I feel confident in saying she was persistent. Very, very persistent.

Fertilizer in the Garden

When I moved into a new house in 1998, it was my first chance to have a garden in my own yard. I built raised beds with cedar landscape timbers. Then, with the help of a friend, I built a little 24″ fence made out of PVC, chicken wire, and plastic ties. It was a durable little fence; I only took it down in 2011, after Clara climbed it as a tiny puppy. That was not a safe move on her part, so the fence came down that day. It wasn’t serving much purpose by then, anyway. Summer and Zani (both trained in agility) could jump it, although they didn’t do so often.

Back to 1998—Daisy was not strong enough to jump the fence. But when I used fertilizer, she wanted in there mightily. Beagle -> stink -> must get there. So she started going to a corner of the fence and pushing her nose at the base, between the vertical pieces of PVC. Here’s a pic of the fence (from much later—2009) where you can see the gap between the corner vertical pieces. I would periodically fasten those together at the base, but the grass and weeds would push them apart and break the plastic ties.

This photo of the fence shows one of the corners where Daisy pushed her nose. The pinch of PVC around a nose would typically be an aversive stimulus.

Just How Persistent Was Daisy?

So when I put fertilizer in the garden for the first time, it became an instant beagle magnet. After circling the garden and not finding any significant holes in the fence, Daisy settled on a corner with a gap. She tried to push her way in with her nose between the poles. She would push her nose, then yelp, push her nose, then yelp, over and over again. Honestly, that’s what she did. She put her whole body into those nose pushes, then yelped from the resulting pinch.

She did that on and off for several days. I have no idea why I didn’t do something to prevent it. I guess I kept thinking she would stop.  These days I would intervene for sure.

It Had To Hurt, But….

So here’s the question. Was getting her nose pinched over and over punishment? Check out this definition.

1. A particular behavior occurs.
2. A consequence immediately follows the behavior.
3. As a result, the behavior is less likely to occur again in the future. The behavior is weakened (Miltenberger, 2008, p. 120).

Part of the definition of punishment in behavior science is that behavior must decrease. But there was no decrease in Daisy’s behavior in that situation. You would think getting one’s nose pinched between vertical staves would decrease the behavior that resulted in the pinch, but it didn’t. She got a few dozen nose pinches over a few days.

What we would expect from a nose pinch would be a positive punishment process.

Like this:

Antecedent: There is fertilizer in the garden on the other side of the fence.
Behavior: Daisy pushes her nose between two poles in the fence.
Consequence: The poles press back on her nose.
Prediction: Pushing her nose between the poles will decrease.
What’s the process? Positive punishment.

Except it didn’t happen that way. She didn’t stop after one or two pinches. Even though she yelped almost every time she pushed, she kept trying.

Why She Stopped

But after a few days, she finally stopped. Was positive punishment finally kicking in, or was it something else? I think it was something else. When positive punishment works, the decrease in behavior usually shows up right away, in response to an unpleasant stimulus of some magnitude.

Instead, Daisy kept up the push/yelp routine. When she finally, gradually stopped after a few days, I think it was extinction at work. The pattern of her behavior fit extinction better than it did punishment. The definition of extinction is:

1. A behavior that has been previously reinforced
2. no longer results in the reinforcing consequences
3. and, therefore, the behavior stops occurring in the future (Miltenberger, 2008, p. 102).

Zani says, “I can relate. I would like in there!”

Certainly, pushing her nose against things to move them had a reinforcement history for Daisy. It had worked many times. But this time, the reinforcement didn’t happen. Pushing her nose all around the garden fence didn’t get her access to the fertilizer. So she finally stopped.

Another possibility is that the odor decreased over time to a non-enticing level. A decrease in odor would affect the strength of the antecedent. But knowing her beagle nose, I bet it was extinction. She stopped because the behavior wasn’t getting her what she wanted.
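If it helps to picture the two patterns, here is a little toy sketch in Python. The daily counts and decay rates are completely invented, purely for illustration; the only point is the shape of the curves. Under positive punishment, we would expect the pushing to collapse almost immediately after the first pinch or two. Under extinction, the pushing persists and then fades gradually as it keeps failing to pay off.

```python
# Toy illustration only: invented numbers, not data from Daisy or any study.
# It contrasts the response pattern we would expect from positive punishment
# (an abrupt collapse after the first aversive consequence or two) with the
# pattern typical of extinction (persistence, then a gradual fade once the
# behavior no longer pays off).

def expected_under_punishment(days=5, pushes_day_one=20):
    """Responding collapses almost immediately after the aversive consequence."""
    counts, pushes = [], pushes_day_one
    for _ in range(days):
        counts.append(round(pushes))
        pushes *= 0.05  # sharp suppression: roughly 95% fewer pushes each day
    return counts

def expected_under_extinction(days=5, pushes_day_one=20):
    """Responding persists, then fades gradually once it never pays off."""
    counts, pushes = [], pushes_day_one
    for _ in range(days):
        counts.append(round(pushes))
        pushes *= 0.6  # gradual fade: roughly 40% fewer pushes each day
    return counts

print("Day:        ", list(range(1, 6)))
print("Punishment: ", expected_under_punishment())  # [20, 1, 0, 0, 0]
print("Extinction: ", expected_under_extinction())  # [20, 12, 7, 4, 3]
```

Daisy’s few dozen pinches spread over several days look a lot more like the second row than the first.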

Aversive Stimuli

One of the ways behavior science is challenging is that it prevents us from generalizing in the ways we humans like to. It’s situational. That’s the lesson I have learned recently about aversive, unpleasant stimuli. (I am using “aversive” in a general sense: an unpleasant stimulus that will often change behavior. But the definition doesn’t depend on that change actually occurring.)

A stimulus can change behavior sometimes but not others. There are hardly any absolutes.

My story illustrates a couple of important things about aversive stimuli that have been sinking in for me lately.

  1. Something can be very unpleasant and still not affect behavior. We can call it an aversive or noxious stimulus because it’s something that is normally unpleasant for that species. But we can’t call it a punisher or negative reinforcer unless those respective changes in behavior happen.
  2. Stimuli are situational. A stimulus can change behavior at one time, and not at another.  Or it can even be a reinforcer at one time and a punisher at another. When you put it on, a sweater provides relief from cold in the form of automatic negative reinforcement. But donning a sweater when it is very hot outside is aversive. If I forced you to put on a sweater every time you came to my backyard in the summer, you would come less often (Mayer, 2018, p. 686).
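To make the context-dependence in point 2 concrete, here is one more tiny, hypothetical sketch. The temperature thresholds and wording are invented; the point is only that the same stimulus (putting on a sweater) can be predicted to serve different functions under different antecedent conditions, and even then we can’t be sure of its function until we see what the behavior actually does.

```python
# Hypothetical illustration of point 2 above: the thresholds and wording are
# invented. The same stimulus ("put on a sweater") is predicted to function
# differently depending on the antecedent condition (ambient temperature),
# but its actual function is only confirmed by watching what behavior does.

def predicted_sweater_function(temperature_f: float) -> str:
    """Predict the likely behavioral function of donning a sweater at this temperature."""
    if temperature_f < 50:
        return ("negative reinforcement: the sweater removes the cold, "
                "so putting it on is likely to be strengthened")
    if temperature_f > 85:
        return ("likely punisher: the sweater adds heat and discomfort, "
                "so the behavior it follows (e.g., coming to the yard) may weaken")
    return "probably neutral: little effect on behavior either way"

for temp in (30, 70, 95):
    print(f"{temp} F -> {predicted_sweater_function(temp)}")
```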

Here is an article that demonstrates the same event operating as both a punisher and a reinforcer for the same rats in different antecedent arrangements.

We know #2 above through life experience. We know that sometimes we want things and will work to get them, but at other times we will work to avoid the same things or be indifferent. But it can be really hard to remember to let that knowledge guide our use of terminology in behavior science.

Does This Mean Hurting Your Dog Repeatedly Is OK If No Behavior Changes?

Of course not. And why would you even do that?

But noting the (non)effect of a low-level aversive stimulus can teach us a lot. It’s another reason not to try to use positive punishment in training: unless you use a stimulus at a knockout level, you are more likely to get the “Daisy” situation. “I don’t like this, but I’ll keep going because there’s something I really want to do or get to.” Going to the knockout level is inhumane and risks terrible side effects. Remember: Skinner originally concluded that punishment didn’t work to change behavior. It’s because the unpleasant stimuli he was using were not intense enough.

This post in no way supports the idea that it’s OK to do painful things to our dogs as long as their behavior doesn’t change. Or if their behavior does change, for that matter.

I think I’ve experienced every typical misunderstanding about behavior science out there. I’m sure I’ll continue to do so. And when I have worked through those misunderstandings, it’s gratifying to understand the science just a little bit better. I am driven to write about it because I want to help others who are on a similar journey of learning.

Daisy, the star of the story. This is my only photo of her.

References

Mayer, G. R., Sulzer-Azaroff, B., & Wallace, M. (2018). Behavior analysis for lasting change. Sloan Publishing.

Miltenberger, R. G. (2008). Behavior modification: Principles and procedures. Wadsworth.


Copyright 2018 Eileen Anderson

4 thoughts on “All That’s Unpleasant Does Not Punish”

  1. “Remember: Skinner originally concluded that punishment didn’t work to change behavior. It’s because the unpleasant stimuli he was using were not intense enough.”
    This kind of flies in the face of people saying that “good” ecollar trainers only use low level shocks that “don’t hurt” the dog. Seems to me you’d need to turn it way up and shock the heck out of the dog so they never want to disobey again to get the desired effect. Then there’s the ones who use the low level shock as a cue I think? The shock cues the dog to do something? Now how does that work, I wonder, and why would you use shock as a cue when you could just use your voice? And what about the claim that they use the shock “like a clicker” to mark the desired behavior? Again, why would you do that when you can just use a clicker? Wow, that’s a lot of questions. You’ll have to excuse my thirst for knowledge; finals are over and I’m not used to having nothing to study now. 😉

    1. Yeah, a lot of the stuff you are talking about here has to do with using shock in a negative reinforcement process, not punishment. For negative reinforcement, the shock is turned on and stays on until the dog performs the desired behavior. So it would be possible to use a low-level shock as a cue for a dog if you had first used a moderate-level shock of some duration in a negative reinforcement scenario. Then the trainer can decrease the shock level. The application of the low-level shock operates similarly to a threat, just as a horse rider doesn’t have to use spurs beyond very light pressure once the horse has learned that they can hurt. And in negative reinforcement, the onset of the aversive stimulus does act as a cue.

      Negative reinforcement is a whole different ball game with regard to the magnitude of the aversive stimulus. It doesn’t take much magnitude at all to get behavior. It takes a high magnitude to punish behavior.

      And although it is theoretically possible to condition a very low-level shock as a bridging stimulus, like a clicker, almost all the people who talk about doing that aren’t really doing it. They are just playing mind games to justify shock collar use.

      Good questions! If you haven’t read it before, you might want to read my post on automatic negative reinforcement and watch the movie. It gives you an idea of how tiny of a stimulus can be used for negative reinforcement. And remember, that’s not generally true of punishment.

