

Power Companies Are Using AI To Build Nuclear Power Plants


Tech companies are betting big on nuclear energy to meet AI’s massive power demands, and they’re using that same AI to speed up the construction of new nuclear power plants.

Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. According to a report from think tank AI Now, this push could lead to disaster.

“If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,” the report said.
The construction of a nuclear plant involves a long legal and regulatory process called licensing that’s aimed at minimizing the risks of irradiating the public. Licensing is complicated and expensive, but it has also largely worked: nuclear accidents in the US are uncommon. Now AI is driving up demand for energy, and new players, mostly tech companies like Microsoft, are entering the nuclear field.

“Licensing is the single biggest bottleneck for getting new projects online,” a slide from a Microsoft presentation about using generative AI to fast track nuclear construction said. “10 years and $100 [million.]”

The presentation, which is archived on the website of the US Nuclear Regulatory Commission (the independent government agency charged with setting standards for reactors and keeping the public safe), detailed how the company would use AI to speed up licensing. In the company’s conception, existing nuclear licensing documents and data about nuclear sites would be used to train an LLM, which would then be used to generate documents to speed up the process.

But the authors of the report from AI Now told 404 Media that they have major concerns about trusting nuclear safety to an LLM. “Nuclear licensing is a process, it’s not a set of documents,” Heidy Khlaaf, the head AI scientist at the AI Now Institute and a co-author of the report, told 404 Media. “Which I think is the first flag in seeing proposals by Microsoft. They don’t understand what it means to have nuclear licensing.”

“Please draft a full Environmental Review for new project with these details,” Microsoft’s presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for “review and refinement.” At the end of Microsoft’s imagined process, it would have “Licensing documents created with reduced cost and time.”
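What the presentation describes is, in effect, a draft-generation loop: a model grounded on prior licensing filings produces a document from a prompt like the one above, and a human reviewer then edits and approves it. The sketch below is a hypothetical illustration of that loop, not code from Microsoft’s proposal; every name in it (the generate() stand-in, draft_environmental_review, human_review) is invented for the example.

```python
# Hypothetical sketch of the prompt -> draft -> human review loop the
# presentation imagines. Nothing here comes from Microsoft's proposal; the
# generate() function is a stand-in for whatever LLM service would be used.

from dataclasses import dataclass

@dataclass
class Draft:
    title: str
    body: str
    approved: bool = False

def generate(prompt: str, prior_filings: list[str]) -> str:
    """Stand-in for an LLM call grounded on existing licensing documents."""
    raise NotImplementedError("replace with a real model call")

def draft_environmental_review(project_details: str, prior_filings: list[str]) -> Draft:
    prompt = ("Please draft a full Environmental Review for a new project "
              f"with these details: {project_details}")
    return Draft(title="Environmental Review (draft)", body=generate(prompt, prior_filings))

def human_review(draft: Draft, reviewer: str) -> Draft:
    # In the imagined workflow, this is where a licensing engineer refines the
    # draft (e.g. with Copilot in a Word doc) and decides whether it can be
    # submitted; nothing is approved automatically in this sketch.
    print(f"{reviewer} must review and sign off before anything goes to a regulator")
    return draft
```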

The Idaho National Laboratory, a nuclear lab run by the Department of Energy, is already using Microsoft’s AI to “streamline” nuclear licensing. “INL will generate the engineering and safety analysis reports that are required to be submitted for construction permits and operating licenses for nuclear power plants,” INL said in a press release. Lloyd's Register, a UK-based maritime organization, is doing the same. American power company Westinghouse is marketing its own AI, called bertha, that promises to make the licensing process go from “months to minutes.”

The authors of the AI Now report worry that using AI to speed up the licensing process will bypass safety checks and lead to disaster. “Producing these highly structured licensing documents is not this box-ticking exercise as implied by these generative AI proposals that we're seeing,” Khlaaf told 404 Media. “The whole point of the licensing process is to reason and understand the safety of the plant and to also use that process to explore the trade-offs between the different approaches, the architectures, the safety designs, and to communicate to a regulator why that plant is safe. So when you use AI, it's not going to support these objectives, because it is not a set of documents or agreements, which I think, you know, is kind of the myth that is now being put forward by these proposals.”

Sofia Guerra, Khlaaf’s co-author, agreed. Guerra is a career nuclear safety expert who has advised the U.S. Nuclear Regulatory Commission (NRC) and works with the International Atomic Energy Agency (IAEA) on the safe deployment of AI in nuclear applications. “This is really missing the point of licensing,” Guerra said of the push to use AI. “The licensing process is not perfect. It takes a long time and there’s a lot of iterations. Not everything is perfectly useful and targeted …but I think the process of doing that, in a way, is really the objective.”

Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.

Law is another profession where people have attempted to use AI to streamline the writing of complicated and involved technical documents. It hasn’t gone well. Lawyers who’ve used AI to write legal briefs have been caught, over and over again, in court. AI-constructed legal arguments cite precedents that do not exist, hallucinate cases, and generally foul up legal proceedings.

Might something similar happen if AI was used in nuclear licensing? “It could be something as simple as software and hardware version control,” Khlaaf said. “Typically in nuclear equipment, the supply chain is incredibly rigorous. Every component, every part, even when it was manufactured is accounted for. Large language models make these really minute mistakes that are hard to track. If you are off in the software version by a letter or a number, that can lead to a misunderstanding of which software version you have, what it entails, the expectation of the behavior of both the software and the hardware and from there, it can cascade into a much larger accident.”
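To make that example concrete: in a rigorous supply chain, version identifiers are verified by exact match against a controlled manifest, so a single wrong character is flagged rather than silently carried forward. The sketch below is purely illustrative; the manifest, component names, and version strings are invented.

```python
# Illustrative only: exact-match verification of software/hardware version
# identifiers against a controlled manifest. Component names and versions
# are invented for this example.

APPROVED_MANIFEST = {
    "reactor-protection-controller": "RPS-4.2.1-B",
    "coolant-flow-monitor": "CFM-7.0.3",
}

def verify_versions(installed: dict[str, str]) -> list[str]:
    """Return discrepancies between installed and approved versions."""
    problems = []
    for component, approved in APPROVED_MANIFEST.items():
        actual = installed.get(component)
        if actual != approved:
            problems.append(f"{component}: expected {approved!r}, found {actual!r}")
    return problems

# A single-character slip ("-D" instead of "-B"), the kind of minute
# transcription error Khlaaf describes, is caught only by exact matching:
print(verify_versions({
    "reactor-protection-controller": "RPS-4.2.1-D",
    "coolant-flow-monitor": "CFM-7.0.3",
}))
```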

Khlaaf pointed to Three Mile Island as an example of an entirely human-made accident that AI may replicate. The accident was a partial nuclear meltdown of a Pennsylvania reactor in 1979. “What happened is that you had some equipment failure and design flaws, and the operators misunderstood what those were due to a combination of a lack of training…that they did not have the correct indicators in their operating room,” Khlaaf said. “So it was an accident that was caused by a number of relatively minor equipment failures that cascaded. So you can imagine, if something this minor cascades quite easily, and you use a large language model and have a very small mistake in your design.”

In addition to the safety concerns, Khlaaf and Guerra told 404 Media that using sensitive nuclear data to train AI models increases the risk of nuclear proliferation. They pointed out that Microsoft is asking not only for historical NRC data but for real-time and project-specific data. “This is a signal that AI providers are asking for nuclear secrets,” Khlaaf said. “To build a nuclear plant there is actually a lot of know-how that is not public knowledge…what’s available publicly versus what’s required to build a plant requires a lot of nuclear secrets that are not in the public domain.”



Tech companies maintain cloud servers that comply with federal regulations around secrecy and are sold to the US government. Anthropic and the National Nuclear Security Administration traded information across an Amazon Top Secret cloud server during a recent collaboration, and it’s likely that Microsoft and others would do something similar. Microsoft’s presentation on nuclear licensing references its own Azure Government cloud servers and notes that it’s compliant with Department of Energy regulations. 404 Media reached out to both Westinghouse Nuclear and Microsoft for this story. Microsoft declined to comment and Westinghouse did not respond.

“Where is this data going to end up and who is going to have the knowledge?” Guerra told 404 Media.

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Nuclear is a dual-use technology. You can use the knowledge of nuclear reactors to build a power plant or you can use it to build a nuclear weapon. The line between nukes for peace and nukes for war is porous. “The knowledge is analogous,” Khlaaf said. “This is why we have very strict export controls, not just for the transfer of nuclear material but nuclear data.”

Proliferation concerns around nuclear energy are real. Fear that a nuclear energy program would become a nuclear weapons program was the justification the Trump administration used to bomb Iran earlier this year. And as part of the rush to produce more nuclear reactors and create infrastructure for AI, the White House has said it will begin selling old weapon-grade plutonium to the private sector for use in nuclear reactors.

Trump’s done a lot to make it easier for companies to build new nuclear reactors and use AI for licensing. The AI Now report pointed to a May 23, 2025 executive order that seeks to overhaul the NRC. The EO called for the NRC to reform its culture, reform its structure, and consult with the Pentagon and the Department of Energy as it navigated changing standards. The goal of the EO is to speed up the construction of reactors and get through the licensing process faster.

A different May 23 executive order made it clear why the White House wants to overhaul the NRC. “Advanced computing infrastructure for artificial intelligence (AI) capabilities and other mission capability resources at military and national security installations and national laboratories demands reliable, high-density power sources that cannot be disrupted by external threats or grid failures,” it said.

At the same time, the Department of Government Efficiency (DOGE) has gutted the NRC. In September, members of the NRC told Congress they were worried they’d be fired if they didn’t approve nuclear reactor designs favored by the administration. “I think on any given day, I could be fired by the administration for reasons unknown,” Bradley Crowell, a commissioner at the NRC, said in Congressional testimony. He also warned that DOGE-driven staffing cuts would make it impossible to increase the construction of nuclear reactors while maintaining safety standards.

“The executive orders push the AI message. We’re not just seeing this idea of the rollback of nuclear regulation because we’re suddenly very excited about nuclear energy. We’re seeing it being done in service of AI,” Khlaaf said. “When you're looking at this rolling back of Nuclear Regulation and also this monopolization of nuclear energy to explicitly power AI, this raises a lot of serious concerns about whether the risk associated with nuclear facilities, in combination with the sort of these initiatives can be justified if they're not to the benefit of civil energy consumption.”

Matthew Wald, an independent nuclear energy analyst and former New York Times science journalist, is more bullish on the use of AI in the nuclear energy field. Like Khlaaf, he also referenced the accident at Three Mile Island. “The tragedy of Three Mile Island was there was a badly designed control room, badly trained operators, and there was a control room indication that was very easy to misunderstand, and they misunderstood it, and it turned out that the same event had begun at another reactor. It was almost identical in Ohio, but that information was never shared, and the guys in Pennsylvania didn't know about it, so they wrecked a reactor,” Wald told 404 Media.

"AI is helpful, but let’s not get messianic about it.”


According to Wald, using AI to consolidate government databases full of nuclear regulatory information could have prevented that. “If you've got AI that can take data from one plant or from a set of plants, and it can arrange and organize that data in a way that's helpful to other plants, that's good news,” he said. “It could be good for safety. It could also just be good for efficiency. And certainly in licensing, it would be more efficient for both the licensee and the regulator if they had a clearer idea of precedent, of relevant other data.”

He also said that the nuclear industry is full of safety-minded engineers who triple check everything. “One of the virtues of people in this business is they are challenging and inquisitive and they want to check things. Whether or not they use computers as a tool, they’re still challenging and inquisitive and want to check things,” he said. “And I think anybody who uses AI unquestionably is asking for trouble, and I think the industry knows that…AI is helpful, but let’s not get messianic about it.”

But Khlaaf and Guerra are worried that the framing of nuclear power as a national security concern and the embrace of AI to speed up construction will set back the adoption of nuclear power. If nuclear isn’t safe, it’s not worth doing. “People seem to have lost sight of why nuclear regulation and safety thresholds exist to begin with. And the reason why nuclear risks, or civilian nuclear risk, were ever justified was due to the capacity for nuclear power to provide flexible civilian energy demands at low cost emissions in line with climate targets,” Khlaaf said.

“So when you move away from that…and you pull in the AI arms race into this cost benefit justification for risk proportionality, it leads government to sort of over index on these unproven benefits of AI as a reason to have nuclear risk, which ultimately undermines the risks of ionizing radiation to the general population, and also the increased risk of nuclear proliferation, which happens if you were to use AI like large language models in the licensing process.”


18 Lawyers Caught Using AI Explain Why They Did It


Earlier this month, an appeals court in California issued a blistering decision and record $10,000 fine against a lawyer who submitted a brief in which “nearly all of the legal quotations in plaintiff’s opening brief, and many of the quotations in plaintiff’s reply brief, are fabricated” through the use of ChatGPT, Claude, Gemini, and Grok. The court said it was publishing its opinion “as a warning” to California lawyers that they will be held responsible if they do not catch AI hallucinations in their briefs.

In that case, the lawyer in question “asserted that he had not been aware that generative AI frequently fabricates or hallucinates legal sources and, thus, he did not ‘manually verify [the quotations] against more reliable sources.’ He accepted responsibility for the fabrications and said he had since taken measures to educate himself so that he does not repeat such errors in the future.”

As the judges remark in their opinion, the use of generative AI by lawyers is now everywhere, and when it is used in ways that introduce fake citations or fake evidence, it is bogging down courts all over America (and the world). For the last few months, 404 Media has been analyzing dozens of court cases around the country in which lawyers have been caught using generative AI to craft their arguments, generate fictitious citations, generate false evidence, cite real cases but misinterpret them, or otherwise take shortcuts that have introduced inaccuracies into their cases. Our main goal was to learn more about why lawyers were using AI to write their briefs, especially when so many lawyers have been caught making errors that lead to sanctions and that ultimately threaten their careers and their standing in the profession.

To do this, we used a crowdsourced database of AI hallucination cases maintained by the researcher Damien Charlotin, which so far contains more than 410 cases worldwide, including 269 in the United States. Charlotin’s database is an incredible resource, but it largely focuses on what happened in any individual case and the sanctions against lawyers, rather than the often elaborate excuses that lawyers told the court when they were caught. Using Charlotin’s database as a starting point, we then pulled court records from around the country for dozens of cases where a lawyer offered a formal explanation or apology. Pulling this information required navigating clunky federal and state court record systems and finding and purchasing the specific record where the lawyer in question tried to explain themselves (these were often called “responses to order to show cause.”) We also reached out to lawyers who were sanctioned for using AI to ask them why they did it. Very few of them responded, but we have included explanations from the few who did.
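For readers curious about the mechanics, the filtering step described above amounts to a few lines of data wrangling. The sketch below is hypothetical: the CSV filename and the column names are placeholders, not the actual schema of Charlotin’s database.

```python
# Hypothetical sketch of the filtering step described above. The filename and
# column names ("jurisdiction", "explanation_filed") are placeholders, not the
# real schema of Charlotin's database.

import csv

with open("ai_hallucination_cases.csv", newline="") as f:
    cases = list(csv.DictReader(f))

us_cases = [c for c in cases if c.get("jurisdiction") == "United States"]
with_explanations = [c for c in us_cases if c.get("explanation_filed") == "yes"]

print(f"{len(cases)} cases total, {len(us_cases)} in the US, "
      f"{len(with_explanations)} with a formal explanation to pull from court records")
```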

What we found was fascinating: a mix of lawyers blaming IT issues, personal and family emergencies, their own poor judgment and carelessness, and demands from their firms and the industry to be more productive and take on more casework. But most often, they simply blamed their assistants.

Few dispute that the legal industry is under great pressure to use AI. Legal giants like Westlaw and LexisNexis have pitched bespoke tools to law firms that are now regularly being used, but Charlotin’s database makes clear that lawyers are regularly using off-the-shelf generalized tools like ChatGPT and Gemini as well. There’s a seemingly endless number of startups selling AI legal tools that do research, write briefs, and perform other legal tasks. While we were working on this article, it became nearly impossible to keep up with new cases of lawyers being sanctioned for using AI. Charlotin has documented 11 new cases within the last week alone.

This article is the first of several 404 Media will write exploring the use of AI in the legal profession. If you’re a lawyer and have thoughts or firsthand experiences, please get in touch. Some of the following anecdotes have been lightly edited for clarity.

💡
Are you a lawyer or do you work in the legal industry? We want to know how AI is impacting the industry, your firm, and your job. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.

A lawyer in Indiana blames the court (Fake case cited)

A judge stated that the lawyer “took the position that the main reason for the errors in his brief was the short deadline (three days) he was given to file it. He explained that, due to the short timeframe and his busy schedule, he asked his paralegal (who once was, but is not currently, a licensed attorney) to draft the brief, and did not have time to carefully review the paralegal's draft before filing it.”

A lawyer in New York blamed vertigo, head colds, and malware

"He acknowledges that he used Westlaw supported by Google Co-Pilot which is an artificial intelligence-based tool as preliminary research aid." The lawyer “goes on to state that he had no idea that such tools could fabricate cases but acknowledges that he later came to find out the limitation of such tools. He apologized for his failure to identify the errors in his affirmation, but partly blames ‘a serious health challenge since the beginning of this year which has proven very persistent which most of the time leaves me internally cold, and unable to maintain a steady body temperature which causes me to be dizzy and experience bouts of vertigo and confusion.’ The lawyer then indicates that after finding about the ‘citation errors’ in his affirmation, he conducted a review of his office computer system and found out that his system was ‘affected by malware and unauthorized remote access.’ He says that he compared the affirmation he prepared on April 9, 2025, to the affirmation he filed to [the court] on April 21, 2025, and ‘was shocked that the cases I cited were substantially different.’”

A lawyer in Florida blames a paralegal and the fact they were doing the case pro bono (Fake cases and hallucinated quotes)

The lawyer “explained that he was handling this appeal pro bono and that as he began preparing the brief, he recognized that he lacked experience in appellate law. He stated that at his own expense, he hired ‘an independent contractor paralegal to assist in drafting the answer brief.’ He further explained that upon receipt of a draft brief from the paralegal, he read it, finalized it, and filed it with this court. He admitted that he ‘did not review the authority cited within the draft answer brief prior to filing’ and did not realize it contained AI-generated content.”

A lawyer in South Carolina said he was rushing (Fake cases generated by Microsoft Copilot)

“Out of haste and a naïve understanding of the technology, he did not independently verify the sources were real before including the citations in the motion filed with the Court seeking a preliminary injunction”

A lawyer in Hawaii blames a New Yorker they hired

This lawyer was sanctioned $100 by a court for one AI-generated case, as well as quoting multiple real cases and misattributing them to that fake case. They said they had hired a per-diem attorney—“someone I had previously worked with and trusted,” they told the court—to draft the case, and though they “did not personally use AI in this case, I failed to ensure every citation was accurate before filing the brief.” The Honolulu Civil Beat reported that the per-diem attorney they hired was from New York, and that they weren’t sure if that attorney had used AI or not.

The lawyer told us over the phone that the news of their $100 sanction had blown up in their district thanks to that article. “I was in court yesterday, and of course the [opposing] attorney somehow brought this up,” they said in a call. According to them, that attorney has also used AI in at least seven cases. Nearly every lawyer is using AI to some degree, they said; it’s just a problem if they get caught. “The judges here have seen it extensively. I know for a fact other attorneys have been sanctioned. It’s public, but unless you know what to search for, you’re not going to find it anywhere. It’s just that for some stupid reason, my matter caught the attention of a news outlet. It doesn’t help with business.”

A lawyer in Arizona blames someone they hired

A judge wrote “this is a case where the majority of authorities cited were either fabricated, misleading, or unsupported. That is egregious … this entire litigation has been derailed by Counsel’s actions. The Opening Brief was replete with citation-related deficiencies, including those consistent with artificial intelligence generated hallucinations.”

The attorney claimed “Neither I nor the supervising staff attorney knowingly submitted false or non-existent citations to the Court. The brief writer in question was experienced and credentialed, and we relied on her professionalism and prior performance. At no point did we intend to mislead the Court or submit citations not grounded in valid legal authority.”

A lawyer in Louisiana blames Westlaw (a legal research tool)

The lawyer “acknowledge[d] the cited authorities were inaccurate and mistakenly verified using Westlaw Precision, an AI-assisted research tool, rather than Westlaw’s standalone legal database.” The lawyer further wrote that she “now understands that Westlaw Precision incorporates AI-assisted research, which can generate fictitious legal authority if not independently verified.” She testified she was unable to provide the Court with this research history because the lawyer who produced the AI-generated citations is currently suspended from the practice of law in Louisiana:

“In the interest of transparency and candor, counsel apologizes to the Court and opposing counsel and accepts full responsibility for the oversight. Undersigned counsel now understands that Westlaw Precision incorporates AI-assisted research, which can generate fictitious legal authority if not independently verified. Since discovering the error, all citations in this memorandum have been independently confirmed, and a Motion for Leave to amend the Motion to Transfer has been filed to withdraw the erroneous citations. Counsel has also implemented new safeguards, including manual cross-checking in non AI-assisted databases, to prevent future mistakes.”

“At the time, undersigned counsel understood these authorities to be accurate and reliable. Undersigned counsel made edits and finalized the pleading but failed to independently verify every citation before filing it. Undersigned counsel takes responsibility for this oversight.

Undersigned counsel wants the Court to know that she takes this matter extremely seriously. Undersigned counsel holds the ethical obligations of our profession in the highest regard and apologizes to opposing counsel and the Court for this mistake. Undersigned counsel remains fully committed to the ethical obligations as an officer of the court and the standards expected by this Court going forward, which is evidenced by requesting leave to strike the inaccurate citations. Most importantly, undersigned counsel has taken steps to ensure this oversight does not happen again.”

A lawyer in New York says the death of their spouse distracted them

“We understand the grave implications of misreporting case law to the Court. It is not our intention to do so, and the issue is being investigated internally in our office,” the lawyer in the case wrote.

“The Opposition was drafted by a clerk. The clerk reports that she used Google for research on the issue,” they wrote. “The Opposition was then sent to me for review and filing. I reviewed the draft Opposition but did not check the citations. I take full responsibility for failing to check the citations in the Opposition. I believe the main reason for my failure is due to the recent death of my spouse … My husband’s recent death has affected my ability to attend to the practice of law with the same focus and attention as before.”

A lawyer in California says it was ‘a legal experiment’

This is a weird one, and has to do with an AI-generated petition filed three times in an antitrust lawsuit brought against Apple by the Coronavirus Reporter Corporation. The lawyer in the case explained that he created the document as a “legal experiment.” He wrote:

“I also ‘approved for distribution’ a Petition which Apple now seeks to strike. Apple calls the Petition a ‘manifesto,’ consistent with their five year efforts to deride us. But the Court should be aware that no human ever authored the Petition for Tim Cook’s resignation, nor did any human spend more than about fifteen minutes on it. I am quite weary of Artificial Intelligence, as I am weary of Big Tech, as the Court knows. We have never done such a test before, but we thought there was an interesting computational legal experiment here.

Apple has recently published controversial research that AI LLM's are, in short, not true intelligence. We asked the most powerful commercially available AI, ChatGPT o3 Pro ‘Deep Research’ mode, a simple question: ‘Did Judge Gonzales Rogers’ rebuke of Tim Cook’s Epic conduct create a legally grounded impetus for his termination as CEO, and if so, write a petition explaining such basis, providing contextual background on critics’ views of Apple’s demise since Steve Jobs’ death.’ Ten minutes later, the Petition was created by AI. I don't have the knowledge to know whether it is indeed 'intelligent,' but I was surprised at the quality of the work—so much so that (after making several minor corrections) I approved it for distribution and public input, to promote conversation on the complex implications herein. This is a matter ripe for discussion, and I request the motion be granted.”

Lawyers in Michigan blame an internet outage

“Unfortunately, difficulties were encountered on the evening of April 4 in assembling, sorting and preparation of PDFs for the approximately 1,500 pages of exhibits due to be electronically filed by Midnight. We do use artificial intelligence to supplement their research, along with strict verification and compliance checks before filing.

AI is incorporated into all of the major research tools available, including West and Lexis, and platforms such as ChatGPT, Claude, Gemini, Grok and Perplexity. [We] do not rely on AI to write our briefs. We do include AI in their basic research and memorandums, and for checking spelling, syntax, and grammar. As Midnight approached on April 4, our computer system experienced a sudden and unexplainable loss of internet connection and loss of connection with the ECF [e-court filing] system … In the midst of experiencing these technical issues, we erred in our standard verification process and missed identifying incorrect text AI put in parentheticals in four cases in footnote 3, and one case on page 12, of the Opposition.”

Lawyers in Washington DC blame Grammarly, ProWritingAid, and an IT error

“After twenty years of using Westlaw, last summer I started using Lexis and its protege AI product as a natural language search engine for general legal propositions or to help formulate arguments in areas of the law where the courts have not spoken directly on an issue. I have never had a problem or issue using this tool and prior to recent events I would have highly recommended it. I failed to heed the warning provided by Lexis and did not double check the citations provided. Instead, I inserted the quotes, caselaw and uploaded the document to ProWritingAid. I used that tool to edit the brief and at one point used it to replace all the square brackets ( [ ) with parentheses.

In preparing and finalizing the brief, I used the following software tools: Pages with Grammarly and ProWritingAid ... through inadvertence or oversight, I was unaware quotes had been added or that I had included a case that did not actually exist … I immediately started trying to figure out what had happened. I spent all day with IT trying to figure out what went wrong.”

A lawyer in Texas blames their email, their temper, and their legal assistant

“Throughout May 2025, Counsel's office experienced substantial technology related problems with its computer and e-mail systems. As a result, a number of emails were either delayed or not received by Counsel at all. Counsel also possesses limited technological capabilities and relies on his legal assistant for filing documents and transcription - Counsel still uses a dictation phone. However, Counsel's legal assistant was out of the office on the date Plaintiffs Response was filed, so Counsel's law clerk had to take over her duties on that day (her first time filing). Counsel's law clerk had been regularly assisting Counsel with the present case and expressed that this was the first case she truly felt passionate about … While completing these items, Counsel's law clerk had various issues, including with sending opposing counsel the Joint Case Management Plan which required a phone conference to rectify. Additionally, Counsel's law clerk believed that Plaintiff’s Response to Defendant's Motion to Dismiss was also due that day when it was not.

In midst of these issues, Counsel - already missing his legal assistant - became frustrated. However, Counsel's law clerk said she had already completed Plaintiff's Response and Counsel immediately read the draft but did not thoroughly examine the cases cited therein … unbeknownst to Counsel and to his dismay, Counsel's law clerk did use artificial intelligence in drafting Plaintiff's Response. Counsel immediately instituted a strict policy prohibiting his staff from using artificial intelligence without exception - Counsel doesn't use artificial intelligence, so neither shall his staff.

Second, Counsel now requires any staff assisting in drafting documents to provide Counsel with a printout of each case cited therein with the passage(s) being relied on highlighted or marked.”

The lawyer also submitted an invoice from a company called Mainframe Computers for $480, which included line items for “Install office,” “printer not working and computer restarting,” “fixes with email and monitors and default fonts,” and “computer errors, change theme, resolution, background, and brightness.”
