Threats Without Borders - Issue 148
Cyber Financial Crime Investigation Newsletter, week ending September 17, 2023
AI is making me a worse writer!
Artificial Intelligence and generative writing tools like Bing Chat, ChatGPT, and Google Bard are reducing the quality of my writing. I’m so paranoid about being accused of publishing AI-produced material that I intentionally leave grammar errors in my work to signal that it was human-created.
I write a lot. The Threats Without Borders newsletter alone averages 1,500 to 2,000 words each week, and I also publish a monthly newsletter for my employer. Additionally, I write all of my speeches, presentations, and class materials. Then there are the compliance reports and dozens, maybe hundreds, of emails each month.
I also ghostwrite and throw my opinion around on LinkedIn, Hacker News, and Reddit, but that doesn’t always have my name attached.
So yeah, I put a lot of words to the screen every week, every month, every year!
I lean on a few AI tools for help but only to make my thoughts more consumable. They may not be agreeable, but I at least try to get them to be readable.
For instance, I write the Threats Without Borders newsletter in a writing application called Ulysses. The text then gets copied into Grammarly for editing. Grammarly is an AI-assisted tool that corrects spelling, grammar, and punctuation. It also checks for clarity and tone and allows me to customize the writing to my voice and style. It does not add AI-generated content to my submission. My goal is for a score of 85% or above so I’ll rework the writing in the Grammarly editor until it meets that standard. I then copy and paste that text into the Substack editor where it gets an additional level of review.
By this time, the text is looking pretty good. Sometimes too good. And that’s when I start to get paranoid.
Half of a cop's job is writing reports. Moving to a criminal investigator position pushes the documentation share of the job to probably 75%. I’ve written a lot of reports over 24 years. I’m also an academic with a graduate degree and now teach at the college level. Again, I’ve put a lot of words to a keyboard, and sometimes, paper.
But even with all that experience sharing my observations, thoughts, and beliefs through written text, I regularly suffer from self-doubt. Sometimes at extreme levels. I still see myself as the slacker from East High who barely graduated and was required to take the remedial English class when he finally made it to community college.
I often read my content and think, “There’s no way anyone is going to believe you wrote this!” And then paranoia and anxiety set in. The fear of being accused of plagiarism or called an AI-generated fraud is overwhelming. So I allow errors to exist, even when my tools scream the correction at me.
The proponents of artificial intelligence claim it is the fix for just about everything. AI-assisted writing will make your stories more engaging, concise, and efficient - seriously, just ask it.
Efficiency is important to me but so is authenticity. I prefer my writing to look and read like it was created by me, a human, not a machine. AI is the solution to many things, but I’ll keep my writing as AI-free as possible.
I suspect I’m not the only writer doing so!
Pet scams and Google pin-code fraud have both been featured in this newsletter. This article discussing scammers targeting owners of lost pets involves both. Using lost-pet ads to validate phone numbers for creating VoIP accounts is new to me, but it’s a smart way to do it. People searching for a lost fur-kid are scared, desperate, and vulnerable. https://www.pennlive.com/news/2023/09/scammers-prey-on-pet-owners-looking-for-lost-animals-i-found-your-cat.html
The ALPHV/Blackcat ransomware group attacked MGM Resorts and extorted them for millions of dollars. Oddly enough, Caesars was also hit, but ALPHV/Blackcat denied being responsible for that attack. There is no need to cover it much here because it’s been at the top of every news website for the past week. I will highlight that it wasn’t a very sophisticated attack, and the group has openly bragged about how they did it. They socially engineered employees of MGM’s help desk to get MFA resets. Yes, calling them, like on the phone. Pay to train your employees now, or pay the ransom demand later. https://www.engadget.com/hackers-claim-it-only-took-a-10-minute-phone-call-to-shutdown-mgm-resorts-143147493.html
Caesars filed its 8-K with the Securities and Exchange Commission (SEC) in a timely manner. https://investor.caesars.com/static-files/0bc13ee5-34a9-402e-8e7a-824b9dba4e57
The FBI, CISA, and the NSA released a joint advisory concerning the growing challenge of “synthetic media,” aka deepfakes. At this point, I’m suspicious of anything the government produces, but this is a good product. So good, in fact, that it makes me even more suspicious! Regardless of who wrote it, or why, it’s well worth your time. https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
A federal court in Virginia has convicted a Chinese national on multiple charges stemming from an international gift card scheme that raked in over seven million dollars. From the press release: “Qinbin Chen, 29, masterminded a criminal conspiracy that obtained, trafficked, used, and laundered gift cards and debit cards purchased by victims, who were mostly elderly, from across the United States. The victims were manipulated into buying Walmart gift cards by fraudsters who told the victims a range of lies, such as their social security numbers had been compromised, their bank accounts had been hacked, or there was an issue with their computer software.” https://www.justice.gov/usao-edva/pr/leader-international-gift-card-fraud-scheme-convicted
The founder of the Key Biscayne investment group has pled guilty to various criminal offenses that fleeced clients out of $115 million. The reporting is scant on details, but it sounds like a pyramid scheme. https://www.miamiherald.com/news/local/crime/article279341084.html
European cybersecurity company Sekoia has written a thorough review of ransomware incidents in the first half of 2023 and examines the current threats to corporate networks. https://blog.sekoia.io/sekoia-io-mid-2023-ransomware-threat-landscape
Director of Fraud - Fanatics. https://jobs.lever.co/fanatics/85db3023-2ba1-4572-9d80-e4fe5260ad0b
Investigate that web resource. https://www.criminalip.io/
Create some visuals. https://www.picyard.in/
iOS Superguide - a complete rundown of the update. https://www.macworld.com/article/1519552/ios-17-release-date-features-compatibility-beta.html
Badger’s Law – “any website with the word ‘Truth’ in the URL has none in the posted content.”
Thanks for reading the newsletter and giving me some of your time! Last week’s issue almost set a new high for views, but unfortunately that didn’t translate into new subscribers. I appreciate everyone who takes time to read the newsletter each week, but if you’re not a subscriber, please consider becoming one. I can’t promise the email will actually get delivered to your inbox (looking at you, Yahoo), but higher subscriber numbers improve the status of the newsletter with Substack.
See you next Tuesday.
“YOU DON’T GET BETTER BY NOT DOING SOMETHING.”
Legal: I am not compensated by any entity for writing this newsletter. Obviously, anything written in this space is my own nonsensical opinion and doesn’t represent the official viewpoint of my employer or any associated organization. Blame me, not them.