Law enforcement agents and prosecutors are increasingly using artificial intelligence tools, AI-generated content, and other personal digital activity to investigate and build cases against suspects. Information that people previously considered private (e.g., AI prompts, search histories, draft images) is now a common part of law enforcement’s investigative arsenal. This shift carries profound implications for criminal defense, especially in the federal arena.
One high-profile example is the prosecution of Jonathan Rinderknecht, charged in connection with the deadly January 2025 Pacific Palisades fire. Federal prosecutors are using his digital prompts and AI activity to construct a narrative of intent, obsession, and dangerousness, aimed at securing a conviction.
In this blog post, I break down why this is a critical developing area for federal criminal defense, what the key issues are, and how we should be thinking about digital footprints.
What We Know So Far about the Rinderknecht Case
Though the facts are still emerging, publicly reported information on the Rinderknecht case offers a useful frame for examining the use of AI-based evidence in criminal investigations and prosecutions:
- Case details: Rinderknecht was charged with malicious destruction of property by fire and arrested in Melbourne, Florida, on October 7, 2025. Prosecutors allege that Rinderknecht intentionally set the Lachman Fire in Los Angeles on January 1, 2025. The fire was initially contained, but it allegedly smoldered underground for days before high winds caused it to reignite as the Palisades Fire on January 7. The fire eventually burned more than 23,000 acres over more than three weeks, killed 12 people, destroyed more than 6,800 structures, and caused billions of dollars’ worth of damage.
- Court hearing: Rinderknecht was held by federal authorities in Florida and made his first court appearance in U.S. District Court for the Middle District of Florida in Orlando on October 8, 2025. At his detention hearing, his lawyer appealed for bail, citing his clean criminal record, but the judge denied the request, citing flight risk and mental health concerns.
- Extradition: Following a federal indictment on October 15, on the original charge plus two additional felonies (one count of arson affecting property used in interstate commerce and one count of timber set afire), Rinderknecht’s case was moved to Los Angeles, where the alleged crimes took place.
- Plea and trial: During his arraignment in U.S. District Court for the Central District of California on October 23, Rinderknecht pleaded not guilty to the three felony charges. A trial date has been set for December 16.
Rinderknecht was ordered to remain in federal custody without bond as he awaits trial. If convicted, he faces a mandatory minimum sentence of five years in prison and a statutory maximum sentence of 45 years.
How Prosecutors Have Used Rinderknecht’s AI Prompts and Online History
During the investigation leading up to Rinderknecht’s arrest, Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) agents conducted more than 500 scientific tests and examined more than 13,000 pieces of evidence. Chief among the evidence prosecutors have used to create a narrative of Rinderknecht’s intent, obsession, and dangerousness is his online history, especially his ChatGPT prompts:
- On July 11, 2024, he asked ChatGPT to create “a dystopian painting” featuring “a burning forest,” people running from the fire, and rich people behind a wall enjoying themselves as the world burned and the poor struggled.
- On November 1, 2024, he confided in ChatGPT, saying that he “literally burnt the Bible” and that doing so made him feel “amazing” and “so liberated.”
- On December 31, 2024, at approximately 11:28 p.m., he listened to a French rap song with themes of despair and bitterness. Google records show that over the previous four days, he had listened to the same song nine times and had watched its music video, which shows the singer lighting things on fire, three times.
- On January 1, 2025, at approximately 12:17 a.m., he called and reported the fire (by that point a local resident had already reported it to 911). During the call, he typed a question into the ChatGPT app on his iPhone, asking, “Are you at fault if a fire is lift [sic] because of your cigarettes.” (ChatGPT’s response was “Yes,” followed by an explanation.)
Prosecutors used the 2024 AI prompts and Google search records, along with other biographical information suggesting mental health issues and violent tendencies, to argue that Rinderknecht should be kept in custody until his trial. As previously stated, he is currently being held in California without bond.
Taking a different approach, the ATF agent who wrote the criminal complaint against Rinderknecht interpreted his AI activity on January 1 as evidence of deceit and malicious intent rather than exculpation. The agent claimed that Rinderknecht “wanted to preserve evidence of himself trying to assist in the suppression of the fire” and to “create evidence regarding a more innocent explanation for the cause of the fire.” In essence, the agent attempted to reframe what could be viewed as a defendant’s effort to document his good faith into proof of guilt, treating the act of creating potentially exonerating material as a sign of manipulation and consciousness of wrongdoing.
Concerns about How Prosecutors Use AI Prompts as Evidence in Federal Cases
As the Rinderknecht case suggests, law enforcement agencies and prosecutors are now using defendants’ digital footprints, including AI-generated prompts, searches, and content, to shape powerful narratives in court. What was once background evidence of online behavior has become a cornerstone of prosecutorial strategy:
AI Prompts as a Window into Intent and State of Mind
At the charging and detention stages, prosecutors frequently present AI search histories and prompt logs as evidence of a defendant’s mindset. A request typed into an AI tool, whether “show me a city on fire” or “how to build an explosive,” can be portrayed as proof of motive, obsession, or premeditation. In federal cases, where the government must prove intent beyond a reasonable doubt, these digital fragments are now used to suggest that curiosity equates to criminal planning.
During detention hearings, such as the one held in Orlando in the Rinderknecht case, prosecutors cite AI-related activity to argue that a defendant poses a danger to the community or lacks mental stability. The prosecution’s arguments reflect a growing trend: the transformation of private, creative, or even therapeutic digital experimentation into government evidence of “dangerousness.” At sentencing, they may use similar evidence to argue for harsher penalties, framing the defendant’s digital conduct as aggravating behavior.
The effect is a subtle but significant shift: private digital expression, once irrelevant to criminal conduct, is now being repurposed to define a defendant’s psychological profile. This shift raises serious concerns about the validity of AI prompts and outputs as a measure of dangerousness or aggravating behavior. Should AI prompts be viewed as a sign of dangerousness? Should they be viewed as aggravating behavior? What exactly is their relationship to the person who wrote them and asked for the output?
Expanding the Scope of Digital Evidence
Traditionally, federal investigators have relied on text messages, browser histories, social-media posts, and location data. AI now adds a new evidentiary layer: interactive prompts and generated outputs. These records, drawn from AI platforms’ internal logs, can reveal a person’s private questions, imaginings, or fears, material that was never intended for public view.
From a defense standpoint, this raises serious concerns. AI models are “black boxes,” meaning their inner workings are not transparent. How an AI system interprets a user’s prompt or produces an image is often unknowable, even to experts. This opacity complicates digital forensics and leads to serious concerns about the admissibility of such evidence.
AI-Generated Content as Prosecutorial Exhibit
In cases like Rinderknecht’s, prosecutors have seized on AI-generated images, scenes of burning cities or apocalyptic landscapes, as visual evidence of a defendant’s fascination with destruction. Such exhibits are emotionally powerful in court, but they pose major evidentiary problems: Was the image produced by the defendant’s prompt or an automated AI suggestion? Could it have been altered? What was the context of its creation?
Without rigorous forensic validation, the risk is that AI-derived material functions less as factual proof and more as character evidence, inviting jurors to conflate artistic imagination with criminal intent.
The Challenges of AI-Related Evidence for Federal Defense Attorneys
For federal defense attorneys, the growing use of AI-derived evidence presents complex technical, procedural, and constitutional challenges. As prosecutors increasingly rely on defendants’ digital searches, AI prompts, and generated images to infer motive or dangerousness, defense lawyers must be prepared to confront these evolving evidentiary tactics with both precision and vigilance. Every stage of a federal case, from discovery and detention to trial, now carries the potential for AI-generated data to play a central role.
The first challenge lies in discovery and forensic review. Defense counsel must demand complete access to device images, prompt logs, and metadata to assess how the AI-related material was obtained, whether proper warrants or consent were secured, and whether the chain of custody has been preserved. Forensic experts should be engaged early to examine the authenticity of the data: whether prompts were user-initiated or auto-generated, and whether the AI model or device history may have influenced or distorted the output. In a federal context, even minor gaps in the chain of custody or inconsistencies in metadata can form the basis for suppression or impeachment.
Equally important is the need to challenge the prosecution’s interpretation of digital behavior. Typing a prompt such as “show me a burning forest” does not inherently prove criminal intent or premeditation. Defense attorneys must emphasize the gap between digital expression and real-world conduct, between curiosity or fantasy and actual planning or execution of a crime. They should demand that prosecutors articulate a clear causal connection between the alleged prompt activity and the charged act. Without such a link, digital material risks functioning as prejudicial character evidence rather than proof of intent.
Admissibility and authenticity also present significant hurdles. AI models operate as “black boxes,” with internal algorithms that even experts cannot fully explain. This opacity complicates forensic validation under standards such as Daubert, which require reliable, reproducible methods for expert testimony. Defense counsel must press for algorithmic transparency and be ready to expose potential bias or error in the AI system itself. Issues of hearsay and authorship, including whether a prompt log constitutes a “statement” by the defendant or an interpretive record created by the AI tool, should be carefully examined to prevent untested assumptions from reaching the jury.
Beyond evidentiary battles, mitigation and context are critical to counter prosecutorial narratives of obsession or dangerousness. Many defendants who engage with AI tools do so for creative, therapeutic, or exploratory reasons, not as a prelude to criminal conduct. Defense counsel should humanize clients by highlighting mental-health treatment, community ties, and the absence of any concrete steps toward committing the alleged crime. In detention hearings, as seen in the Orlando proceeding involving Jonathan Rinderknecht, prosecutors are increasingly using AI activity to argue for continued detention. Defense lawyers must respond with compelling evidence of stability and conditions that rebut claims of ongoing risk.
Finally, federal defense attorneys must adopt proactive strategies for clients who may be under investigation. Clients should be warned that AI-generated material and search histories are not private and can be recovered and misinterpreted. Counsel should preserve data immediately, document the context behind prompts, and engage qualified experts who can explain technical and psychological nuances to the court. Pretrial motions should challenge the scope and reliability of AI-derived evidence, and detention plans should include robust mitigation packages to prevent prosecutors from using digital history as a proxy for future threat.
In short, defending against AI-related evidence requires the same meticulous scrutiny applied to DNA or forensic science but with added layers of technological complexity. Federal courts are only beginning to grapple with how to treat AI-derived material, and until clearer standards emerge, defense attorneys must ensure that due process, not algorithmic speculation, governs how such evidence is used in criminal prosecutions.
Get Help from an Experienced, Savvy Federal Defense Attorney
Jonathan Rinderknecht’s case signals a new frontier: digital prompts and AI content moving from passive interest to active prosecution tool. Defense attorneys must adapt accordingly. As the lines between imagination, interest, fantasy and criminal action blur under the digital microscope, the fundamentals of criminal defense (presumption of innocence, contesting intent, authenticity of evidence) grow ever more essential.
At Haas Law, we remain vigilant in this evolving space. We understand that federal investigations today often extend far beyond physical evidence or witness testimony; they reach into a person’s private digital life. When prosecutors use AI prompts, search histories, or generated content to build a narrative of guilt, those accused need a defense team that can meet that technology head-on.
Attorney David Haas combines deep experience in federal criminal defense with a strategic understanding of how digital evidence is gathered, interpreted, and challenged. Our firm works diligently to uncover weaknesses in the government’s case, expose overreach, and ensure that constitutional protections are not eroded by technological novelty.
If you or a loved one is facing federal investigation where digital/AI evidence may be implicated, early action is vital. The prompts you typed, the bot you consulted, the images you generated will not simply vanish. They may become part of the government’s narrative. Early intervention by an experienced, knowledgeable federal defense attorney can make all the difference in protecting your rights, your reputation, and your future.
Call Haas Law today at 407-392-9299 or fill in the “Tell Us What Happened” form on our website to get the vital protection and defense you need.
