At one point in my academic career I found myself facing something resembling an ethical dilemma: I had what I felt was a fantastic idea for a research project, hindered only by the fact that someone had conducted a very similar experiment a few years prior to my insight. To those of you unfamiliar with the way research is conducted, this might not seem like a big deal; after all, good science requires replications, so it seems like I should be able to go about my research anyway with no ill effects. Somewhat unfortunately for me – and the scientific method more generally – academic journals are not often keen on publishing replications, nor are dissertation committees and other institutions that might eventually examine my resume impressed by them. There was, however, a possible “out” for me in this situation: I could try to claim ignorance. Had I not read the offending paper, or if others didn’t know about it (“it” being either the paper’s existence or the knowledge that I had read it), I could have more convincingly presented my work as entirely novel. All I had to do was not cite the paper and write as if no work had been conducted on the subject, much like Homer Simpson telling himself, “If I don’t see it, it’s not illegal” as he runs a red light.
Since I can't travel back in time and unread the paper, presenting the idea as completely new (which might mean more credit for me) would require that I convince others that I had not read it. Attempting that, however, comes with a certain degree of risk: if other people were to find out that I had read the paper and failed to give proper credit, my reputation as a researcher would likely suffer as a result. Further, since I know that I read the paper, that knowledge might unintentionally leak out, resulting in my making an altogether weaker claim of novelty. Thankfully (or not so thankfully, depending on your perspective), there's another way around this problem that doesn't involve time travel: my memory for the study could simply "fail". If I were suddenly no longer aware of the fact that I had read the paper - if those memories no longer existed, or existed but could not be accessed - I could honestly claim that my research was new and exciting, making me that much better off.
Some new research by Shu and Gino (2012) asked whether our memories might function in this fashion, much like the Joo Janta 200 Super-Chromatic Peril-Sensitive Sunglasses found in The Hitchhiker's Guide to the Galaxy series: darkening at the first sign of danger, preventing the wearer from noticing and allowing them to remain blissfully unaware. In this case, however, the researchers asked whether engaging in an immoral action - cheating - might subsequently result in the actor's inability to remember other moral rules. Across four experiments, when subjects were given an opportunity to act less than honestly, either through commission or omission, they reported remembering fewer previously read moral - but not neutral - rules.
In the first of these experiments, participants read both an honor code and a list of requirements for obtaining a driver's license, and they were informed that they would be answering questions about the two later. The subjects were then given a series of problems to try to solve in a given period of time, with each correct answer netting a small profit. In one of the conditions, the experimenter tallied the number of correct answers for each participant and paid them accordingly; in the other condition, subjects noted how many answers they got right and paid themselves privately, allowing subjects to misrepresent their performance for financial gain. Following their payment, subjects were given a memory task for the previously-read information. When given the option for cheating, about a third of the subjects took advantage of the opportunity, reporting that they had solved an additional five of the problems, on average. That some people cheated isn't terribly noteworthy; what is noteworthy is that when the subjects were tested on their recall of the information they had initially read, those who cheated tended to remember fewer items concerning the honor code than those who did not (2.33 vs 3.71, respectively), but remembered a similar number of items about the license rules (4 vs 3.79). The cheaters' memories seemed to be, at least temporarily, selectively impaired for moral items.
Of course, that pattern of results is open to a plausible alternative explanation: people who read the moral information less carefully were also more likely to cheat (or people who were more interested in cheating had less of an interest in moral information). The second experiment sought to rule that explanation out. In the follow-up study, subjects initially read two moral documents: the honor code and the Ten Commandments. The design was otherwise similar, minus one key detail: subjects took two memory tasks, one before they had the opportunity to cheat and another one after the fact. Before there was any option for dishonest behavior, subjects' performance on their memory for moral items was similar regardless of whether they would later cheat or not (4.33 vs 4.44, respectively). After the problem-solving task, however, the subjects who cheated subsequently remembered fewer moral items about the second list they read (3.17), relative to those who did not end up cheating (4.21). The decreased performance on the memory task seemed to be specific to the subjects who cheated, but only after they had acted dishonestly, not before.
The third experiment shifted gears, looking instead at acts of omission rather than outright lying. First, subjects were asked to read the honor code as before, with one group of subjects being informed that the memory task they would later complete would yield an additional $1.50 of payment for each correct answer. This gave the subjects some incentive to remember and accurately report their knowledge of the honor code later (to try to rule out the possibility that, previously, subjects had remembered the same amount of moral information but simply neglected to report it). Next, subjects were asked to solve some SAT problems on a computer, and each correct answer would, as before, net the subject some additional payment. However, some subjects were informed that the program they were working with contained a glitch that would cause the correct answer to be displayed on the screen five seconds after the problem appeared unless they hit the space bar. The results showed that, of the subjects who knew the correct answer would pop up on the screen, almost all of them (minus one very moral subject) made use of that glitch at least once during the experiment and, as before, the cheaters recalled fewer moral items than the non-cheating groups (4.53 vs 6.41). Further, while the incentives for accurate recall were effective in the non-cheating group (they remembered more items when they were paid for each correct answer), this was not the case for the cheaters: whether they were being paid to remember or not, the cheaters still remembered about the same amount of information.
Forgetting about the fourth experiment for now, I'd like to consider why we might expect to see this pattern of results. Shu and Gino (2012) suggest that such motivated forgetting might help in "reducing dissonance and regret", so as to maintain one's "self-image". Such explanations are not even theoretically plausible functions for this kind of behavior, as "feeling good", in and of itself, doesn't do anything useful. In fact, forgetting moral rules could be harmful, to the extent that it might make one more likely to commit acts that others would morally condemn, resulting in increased social sanctions or physical aggression. However, if such ignorance were used strategically, it might allow the immoral actor in question to mitigate the extent of that condemnation. That is to say, committing certain immoral acts out of ignorance is seen as being less deserving of punishment than committing them intentionally, so if you can persuade others that you just made a mistake, you'd be better off.
While such an explanation might be at least plausible, there are some major issues with it, namely that cheating-contingent rule forgetting is, well, contingent on the fact that you cheated. Some cognitive system needs to know that you cheated in the first place in order to start suppressing the accessibility of your memory for moral rules, and if that system knows that a moral rule has been violated, it may leak that information into the world (in other words, it might cause the same problem that it was hypothesized to solve). Relatedly, suppressing memory accessibility for moral rules more generally - specifically, moral rules unrelated to the current situation - probably won't do you much good when it comes to persuading others that you didn't know the moral rule which you broke in the first place, which is what they'll likely be condemning you for. If you're caught stealing, forgetting that adultery is immoral won't help out (and claiming that you didn't know stealing was immoral is itself not the most believable of excuses).
That said, the function behind the cognitive mechanisms generating this pattern of results likely does involve persuasion at its conceptual core. That people have difficulty accessing moral information after they've done something less than moral probably represents some cognitive systems for moral condemnation becoming less active (one side effect of which is that your memory for moral rules isn't accessed, as one isn't trying to find a moral violation), while systems for defending against moral condemnation come online. Indeed, as the fourth, unreviewed study found, even moral words appeared to be less accessible, not just rules. However, this was only the case for cheaters who had been exposed to an honor code; when there was less of a need to defend against condemnation (when one didn't cheat or hadn't been exposed to an honor code), those systems stayed relatively dormant.
References: Shu, L., & Gino, F. (2012). Sweeping dishonesty under the rug: How unethical actions lead to forgetting of moral rules. Journal of Personality and Social Psychology, 102(6), 1164-1177.