We’ve got the latest on GitLab’s data disaster, a clever new method to cheat at the slots & a new Netgear exploit that’s coming for your network!
Plus your feedback, a giant roundup & much, much more!
In this case, it was the accountants who noticed something was wrong.
What? No centralised real-time monitoring?
IN EARLY JUNE 2014, accountants at the Lumiere Place Casino in St. Louis noticed that several of their slot machines had—just for a couple of days—gone haywire. The government-approved software that powers such machines gives the house a fixed mathematical edge, so that casinos can be certain of how much they’ll earn over the long haul—say, 7.129 cents for every dollar played. But on June 2 and 3, a number of Lumiere’s machines had spit out far more money than they’d consumed, despite not awarding any major jackpots, an aberration known in industry parlance as a negative hold. Since code isn’t prone to sudden fits of madness, the only plausible explanation was that someone was cheating.
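The "hold" arithmetic the accountants were watching can be sketched in a few lines. This is an illustrative example with made-up figures (only the 7.129% edge comes from the article): hold is the fraction of money wagered that the machine keeps, and it goes negative when payouts exceed wagers.

```python
def hold_percentage(coin_in, coin_out):
    """Fraction of wagers retained by the house; negative when the
    machine pays out more than it takes in (a 'negative hold')."""
    return (coin_in - coin_out) / coin_in

# Normal operation: $10,000 wagered, $9,287.10 paid out
print(hold_percentage(10_000.00, 9_287.10))   # ≈ 0.07129, the house edge

# What the accountants saw: $10,000 wagered, $13,000 paid out
print(hold_percentage(10_000.00, 13_000.00))  # -0.3, a negative hold
```

Over enough spins the hold should converge on the approved edge, which is why a sustained negative number with no jackpot is a red flag rather than a lucky streak.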
Casino security pulled up the surveillance tapes and eventually spotted the culprit, a black-haired man in his thirties who wore a Polo zip-up and carried a square brown purse. Unlike most slots cheats, he didn’t appear to tinker with any of the machines he targeted, all of which were older models manufactured by Aristocrat Leisure of Australia. Instead he’d simply play, pushing the buttons on a game like Star Drifter or Pelican Pete while furtively holding his iPhone close to the screen.
He’d walk away after a few minutes, then return a bit later to give the game a second chance. That’s when he’d get lucky. The man would parlay a $20 to $60 investment into as much as $1,300 before cashing out and moving on to another machine, where he’d start the cycle anew. Over the course of two days, his winnings tallied just over $21,000. The only odd thing about his behavior during his streaks was the way he’d hover his finger above the Spin button for long stretches before finally jabbing it in haste; typical slots players don’t pause between spins like that.
On June 9, Lumiere Place shared its findings with the Missouri Gaming Commission, which in turn issued a statewide alert. Several casinos soon discovered that they had been cheated the same way, though often by different men than the one who’d bilked Lumiere Place. In each instance, the perpetrator held a cell phone close to an Aristocrat Mark VI model slot machine shortly before a run of good fortune.
By examining rental-car records, Missouri authorities identified the Lumiere Place scammer as a 37-year-old Russian national. He had flown back to Moscow on June 6, but the St. Petersburg–based organization he worked for, which employs dozens of operatives to manipulate slot machines around the world, quickly sent him back to the United States to join another cheating crew. The decision to redeploy him to the US would prove to be a rare misstep for a venture that’s quietly making millions by cracking some of the gaming industry’s most treasured algorithms.
Russia has been a hotbed of slots-related malfeasance since 2009, when the country outlawed virtually all gambling. (Vladimir Putin, who was prime minister at the time, reportedly believed the move would reduce the power of Georgian organized crime.) The ban forced thousands of casinos to sell their slot machines at steep discounts to whatever customers they could find. Some of those cut-rate slots wound up in the hands of counterfeiters eager to learn how to load new games onto old circuit boards. Others apparently went to the suspect’s bosses in St. Petersburg, who were keen to probe the machines’ source code for vulnerabilities.
By early 2011, casinos throughout central and eastern Europe were logging incidents in which slots made by the Austrian company Novomatic paid out improbably large sums. Novomatic’s engineers could find no evidence that the machines in question had been tampered with, leading them to theorize that the cheaters had figured out how to predict the slots’ behavior. “Through targeted and prolonged observation of the individual game sequences as well as possibly recording individual games, it might be possible to allegedly identify a kind of ‘pattern’ in the game results,” the company admitted in a February 2011 notice to its customers.
Recognizing those patterns would require remarkable effort. Slot machine outcomes are controlled by programs called pseudorandom number generators that produce baffling results by design. Government regulators, such as the Missouri Gaming Commission, vet the integrity of each algorithm before casinos can deploy it.
But as the “pseudo” in the name suggests, the numbers aren’t truly random. Because human beings create them using coded instructions, PRNGs can’t help but be a bit deterministic. (A true random number generator must be rooted in a phenomenon that is not manmade, such as radioactive decay.) PRNGs take an initial number, known as a seed, and then mash it together with various hidden and shifting inputs—the time from a machine’s internal clock, for example—in order to produce a result that appears impossible to forecast. But if hackers can identify the various ingredients in that mathematical stew, they can potentially predict a PRNG’s output. That process of reverse engineering becomes much easier, of course, when a hacker has physical access to a slot machine’s innards.
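The determinism described above is easy to demonstrate with a toy PRNG. The sketch below uses a linear congruential generator with well-known textbook constants; real slot PRNGs are far more elaborate, but the principle is the same: identical seed, identical "random" sequence, every time.

```python
class LCG:
    """A minimal linear congruential generator -- a classic, simple PRNG.
    Constants are the widely published Numerical Recipes values."""
    A, C, M = 1664525, 1013904223, 2**32

    def __init__(self, seed):
        # The seed is the hidden starting state the article describes.
        self.state = seed % self.M

    def next(self):
        # Each output is a fixed arithmetic function of the prior state.
        self.state = (self.A * self.state + self.C) % self.M
        return self.state

gen1 = LCG(seed=12345)
gen2 = LCG(seed=12345)
# Two generators with the same seed produce identical sequences --
# nothing about the output is actually random.
print([gen1.next() for _ in range(3)] == [gen2.next() for _ in range(3)])  # True
```

Real machines hide the seed by mixing in shifting inputs like the internal clock, but the underlying function is still deterministic, which is the crack the St. Petersburg crew exploited.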
Knowing the secret arithmetic that a slot machine uses to create pseudorandom results isn’t enough to help hackers, though. That’s because the inputs for a PRNG vary depending on the temporal state of each machine. The seeds are different at different times, for example, as is the data culled from the internal clocks. So even if they understand how a machine’s PRNG functions, hackers would also have to analyze the machine’s gameplay to discern its pattern. That requires both time and substantial computing power, and pounding away on one’s laptop in front of a Pelican Pete is a good way to attract the attention of casino security.
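To see why observation plus knowledge of the arithmetic is enough, here is a sketch of the prediction step against the same kind of toy LCG as above (same hypothetical constants). In this stripped-down case, one raw observed output reveals the entire future sequence; real machines expose only mangled outputs at unknown clock offsets, which is why the article says the attackers also needed recorded gameplay and serious computing power.

```python
A, C, M = 1664525, 1013904223, 2**32  # toy LCG constants (illustrative)

def lcg_next(state):
    return (A * state + C) % M

# The "machine", seeded from something the attacker can't see directly.
machine_state = 987654321
machine_outputs = []
for _ in range(4):
    machine_state = lcg_next(machine_state)
    machine_outputs.append(machine_state)

# The attacker observes only the first output. Because this toy PRNG's
# output IS its internal state, cloning it predicts every future spin.
s = machine_outputs[0]
predicted = []
for _ in range(3):
    s = lcg_next(s)
    predicted.append(s)

print(predicted == machine_outputs[1:])  # True: spins 2-4 predicted exactly
```

The gap between this toy and a real Aristocrat machine (truncated outputs, unknown clock inputs, regulator-vetted algorithms) is exactly the time-and-compute cost the paragraph above describes.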
On December 10, not long after security personnel spotted the suspect inside the Hollywood Casino in St. Louis, four scammers were arrested. Because the suspect and his cohorts had pulled their scam across state lines, federal authorities charged them with conspiracy to commit fraud. The indictments represented the first significant setbacks for the St. Petersburg organization; never before had any of its operatives faced prosecution.
The Missouri and Singapore cases appear to be the only instances in which scammers have been prosecuted, though a few have also been caught and banned by individual casinos. At the same time, the St. Petersburg organization has sent its operatives farther and farther afield. In recent months, for example, at least three casinos in Peru have reported being cheated by Russian gamblers who played aging Novomatic Coolfire slot machines.
The economic realities of the gaming industry seem to guarantee that the St. Petersburg organization will continue to flourish. The machines have no easy technical fix. As Hoke notes, Aristocrat, Novomatic, and any other manufacturers whose PRNGs have been cracked “would have to pull all the machines out of service and put something else in, and they’re not going to do that.” (In Aristocrat’s statement to WIRED, the company stressed that it has been unable “to identify defects in the targeted games” and that its machines “are built to and approved against rigid regulatory technical standards.”) At the same time, most casinos can’t afford to invest in the newest slot machines, whose PRNGs use encryption to protect mathematical secrets; as long as older, compromised machines are still popular with customers, the smart financial move for casinos is to keep using them and accept the occasional loss to scammers.
So the onus will be on casino security personnel to keep an eye peeled for the scam’s small tells. A finger that lingers too long above a spin button may be a guard’s only clue that hackers in St. Petersburg are about to make another score.
- This came to our attention from Shawn
- For most people, routers are the little boxes that sit between you and your ISP. They do NAT, possibly act as a firewall, and generally stop the outside world from getting in without your permission. Well, that’s what they’re supposed to do. The long-standing issue is updates. When vulnerabilities are found, the code needs to be patched. With these devices, that can be troublesome, given that everyday consumers can’t be expected to update them. For us geeks, this isn’t so much of an issue, as long as the updates are made available to us.
- We patch our own systems already, patching the firmware on a device… we can do that too.
- The vast majority of router users are unaware that their devices ever need an update. The routers just sit there, exposed, until someone finds them. When they’re found to have a vulnerability, they can become part of a botnet: a huge collection of devices ready to do the bidding of those with ill intent. These botnets can be used for a variety of malicious purposes. Why do this? Most often, it’s money.
- This story is about someone discovering a problem with their router, and then exploring it.
This also came from Shawn
Source-code hub GitLab.com is in meltdown after experiencing data loss as a result of what it has suddenly discovered are ineffectual backups.
On Tuesday evening, Pacific Time, the startup issued a sobering series of tweets we’ve listed below. Behind the scenes, a tired sysadmin, working late at night in the Netherlands, had accidentally deleted a directory on the wrong server during a frustrating database replication process: he wiped a folder containing 300GB of live production data that was due to be replicated.
Just 4.5GB remained by the time he canceled the rm -rf command. The last potentially viable backup was taken six hours beforehand.
That Google Doc mentioned in the last tweet notes: “This incident affected the database (including issues and merge requests) but not the git repos (repositories and wikis).”
So some solace there for users because not all is lost. But the document concludes with the following:
So in other words, out of 5 backup/replication techniques deployed none are working reliably or set up in the first place.
The world doesn’t contain enough faces and palms to even begin to offer a reaction to that sentence. Or, perhaps, to summarise the mistakes the startup candidly details as follows:
- LVM snapshots are by default only taken once every 24 hours. YP happened to run one manually about 6 hours prior to the outage
- Regular backups seem to also only be taken once per 24 hours, though YP has not yet been able to figure out where they are stored. According to JN these don’t appear to be working, producing files only a few bytes in size.
- SH: It looks like pg_dump may be failing because PostgreSQL 9.2 binaries are being run instead of 9.6 binaries. This happens because omnibus only uses Pg 9.6 if data/PG_VERSION is set to 9.6, but on workers this file does not exist. As a result it defaults to 9.2, failing silently. No SQL dumps were made as a result. Fog gem may have cleaned out older backups.
- Disk snapshots in Azure are enabled for the NFS server, but not for the DB servers.
- The synchronisation process removes webhooks once it has synchronised data to staging. Unless we can pull these from a regular backup from the past 24 hours they will be lost
- The replication procedure is super fragile, prone to error, relies on a handful of random shell scripts, and is badly documented
- Our backups to S3 apparently don’t work either: the bucket is empty
Making matters worse is the fact that GitLab last year decreed it had outgrown the cloud and would build and operate its own Ceph clusters. GitLab’s infrastructure lead Pablo Carranza said the decision to roll its own infrastructure “will make GitLab more efficient, consistent, and reliable as we will have more ownership of the entire infrastructure.”
See also GitLab.com Database Incident
See also Catastrophic Failure – Myth Weavers – My thanks to Rikai for bringing this to our attention.
An example of why making sure your backup solution is solid as hell is extremely important.
The guy is completely honest and takes ownership of the mistakes he made. Hopefully others can learn from his mistakes.
For context, Myth-Weavers is a website that handles the creation, management, and sharing of D&D (and other tabletop RPG) character sheets online ( https://www.myth-weavers.com/sheetindex.php ). They lost about six months of data.
Backup automation is good, because people will fail and skip steps more often than computers will, and this is a perfect example of that.
The trick is getting it done RIGHT and having it NOTIFY you when something ISN’T right, as well as making it consistent, reproducible, and redundant if possible. This is also an example of why, if you have data you care about, that step should not be skipped.
Automated backups are a lot of up-front work that people often avoid doing, at least in part, and regret later. This is a well-documented postmortem of what happens when you do that and why you should set aside the time and get it done.
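The "verify and notify" point above can be sketched in a few lines. This is a hypothetical example (the path, age limit, and size threshold are all made up): an automated backup is only useful if something independently checks that the file exists, is recent, and isn’t a few-bytes-long silent failure like the pg_dump output GitLab found.

```python
import os
import time

MAX_AGE_SECONDS = 24 * 60 * 60   # expect at least one backup per day
MIN_SIZE_BYTES = 1024 * 1024     # a real database dump should not be tiny

def verify_backup(path):
    """Return a list of problems with the backup file; empty means OK."""
    problems = []
    if not os.path.exists(path):
        return ["backup file is missing"]
    age = time.time() - os.path.getmtime(path)
    if age > MAX_AGE_SECONDS:
        problems.append("backup is older than 24 hours")
    if os.path.getsize(path) < MIN_SIZE_BYTES:
        problems.append("backup is suspiciously small (silent failure?)")
    return problems

# In a real setup you'd page or email these, not just print them.
for problem in verify_backup("/backups/db-latest.dump"):
    print("ALERT:", problem)
```

A check like this would have flagged both of GitLab’s failure modes, the few-bytes-in-size pg_dump files and the empty S3 bucket, long before anyone needed to restore.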
Not exactly mission-critical data, but still very important data for the audience they cater to. Handcrafted, imagination-related kinda stuff.