v0.1.16

The most requested feature is finally implemented: tab completion and command hinting. Enjoy!

Shared by FrostedGlitch, 1011 days ago

xone 0.2

Added

  • Compatibility with more vendors (see the quick check after this list):
    • Turtle Beach (0x10f5)
    • Hyperkin (0x2e24)
    • Nacon (0x3285)
    • BDA (0x20d6)
    • 8BitDo (0x2dc8)
  • Share button input for Xbox Series X|S gamepads
  • USB remote wakeup support (for gamepads)
  • Guide button LED mode control via sysfs
  • Fully featured driver for the wireless dongle 🎉
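
As a quick sanity check, a connected pad's USB vendor ID can be matched against the list above with lsusb; this one-liner is just an illustration, not part of the release notes:

$ lsusb | grep -iE "10f5|2e24|3285|20d6|2dc8"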

Fixed

  • Build on newer kernels (5.15+)
  • Sporadic headset malfunction due to ENOSPC

Shared by FrostedGlitch, 1011 days ago

How a Docker footgun led to a vandal deleting NewsBlur’s MongoDB database

tl;dr: A vandal deleted NewsBlur’s MongoDB database during a migration. No data was stolen or lost.

I’m in the process of moving everything on NewsBlur over to Docker containers in prep for a big redesign launching next week. It’s been a great year of maintenance and I’ve enjoyed the fruits of Ansible + Docker for NewsBlur’s 5 database servers (PostgreSQL, MongoDB, Redis, Elasticsearch, and soon ML models). The day was wrapping up and I settled into a new book on how to tame the machines once they’re smarter than us when I received a strange NewsBlur error on my phone.

"query killed during yield: renamed collection 'newsblur.feed_icons' to 'newsblur.system.drop.1624498448i220t-1.feed_icons'"

There is honestly no set of words in that error message that I ever want to see again. What is drop doing in that error message? Better go find out.

Logging into the MongoDB machine to check what state the DB was in, I came across the following…

nbset:PRIMARY> show dbs
READ__ME_TO_RECOVER_YOUR_DATA   0.000GB
newsblur                        0.718GB

nbset:PRIMARY> use READ__ME_TO_RECOVER_YOUR_DATA
switched to db READ__ME_TO_RECOVER_YOUR_DATA
    
nbset:PRIMARY> db.README.find()
{ 
    "_id" : ObjectId("60d3e112ac48d82047aab95d"), 
    "content" : "All your data is a backed up. You must pay 0.03 BTC to XXXXXXFTHISGUYXXXXXXX 48 hours for recover it. After 48 hours expiration we will leaked and exposed all your data. In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server! You can buy bitcoin here, does not take much time to buy https://localbitcoins.com or https://buy.moonpay.io/ After paying write to me in the mail with your DB IP: FTHISGUY@recoverme.one and you will receive a link to download your database dump." 
}

Two thoughts immediately occurred:

  1. Thank goodness I have some recently checked backups on hand
  2. No way they have that data without me noticing

Three and a half hours before this happened, I switched the MongoDB cluster over to the new servers. When I did that, I shut down the original primary in order to delete it in a few days when all was well. And thank goodness I did that as it came in handy a few hours later. Knowing this, I realized that the hacker could not have taken all that data in so little time.

With that in mind, I’d like to answer a few questions about what happened here.

  1. Was any data leaked during the hack? How do you know?
  2. How did NewsBlur’s MongoDB server get hacked?
  3. What will happen to ensure this doesn’t happen again?

Let’s start by talking about the most important question of all which is what happened to your data.

1. Was any data leaked during the hack? How do you know?

I can definitively write that no data was leaked during the hack. I know this because of two different sets of logs showing that the automated attacker only issued deletion commands and did not transfer any data off of the MongoDB server.

Below is a snapshot of the bandwidth of the db-mongo1 machine over 24 hours:

You can imagine the stress I experienced in the forty minutes between 9:35p, when the hack began, and 10:15p, when the fresh backup snapshot was identified and put into gear. Let’s break down each moment:

  1. 6:10p: The new db-mongo1 server was put into rotation as the MongoDB primary server. This machine was the first of the new, soon-to-be private cloud.
  2. 9:35p: Three hours later an automated hacking attempt opened a connection to the db-mongo1 server and immediately dropped the database. Downtime ensued.
  3. 10:15p: Before the former primary server could be placed into rotation, a snapshot of the server was made to ensure the backup would not delete itself upon reconnection. This cost a few hours of downtime, but saved nearly 18 hours of a day’s data by not forcing me to go into the daily backup archive.
  4. 3:00a: Snapshot completes, replication from original primary server to new db-mongo1 begins. What you see in the next hour and a half is what the transfer of the DB looks like in terms of bandwidth.
  5. 4:30a: Replication, which is inbound from the old primary server, completes, and now replication begins outbound on the new secondaries. NewsBlur is now back up.

The most important bit of information the above chart shows us is what a full database transfer looks like in terms of bandwidth. From 6p to 9:30p, the amount of data was the expected amount from a working primary server with multiple secondaries syncing to it. At 3a, you’ll see an enormous amount of data transferred.

This tells us that the hacker was an automated digital vandal rather than a concerted hacking attempt. And if we were to pay the ransom, it wouldn’t do anything because the vandals don’t have the data and have nothing to release.

We can also reason that the vandal was not able to access any files that were on the server outside of MongoDB due to using a recent version of MongoDB in a Docker container. Unless the attacker had access to a 0-day for both MongoDB and Docker, it is highly unlikely they were able to break out of the MongoDB server connection.

While the server was being snapshotted, I used that time to figure out how the hacker got in.

2. How did NewsBlur’s MongoDB server get hacked?

Turns out the ufw firewall I enabled and diligently kept on a strict allowlist with only my internal servers didn’t work on a new server because of Docker. When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world. So while my firewall was “active”, doing a sudo iptables -L | grep 27017 showed that MongoDB was open to the world. This has been a Docker footgun since 2014.
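
One way to avoid this footgun, assuming the container only needs to be reached from the host or a private network, is to bind the published port to a specific address; Docker then scopes its iptables rule to that address instead of 0.0.0.0. A minimal sketch:

# Published on all interfaces: Docker opens 27017 to the world
$ docker run -d -p 27017:27017 mongo

# Bound to loopback only: unreachable from outside the host
$ docker run -d -p 127.0.0.1:27017:27017 mongo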

To be honest, I’m a bit surprised it took over 3 hours from when I flipped the switch to when a hacker/vandal dropped NewsBlur’s MongoDB collections and pretended to ransom about 250GB of data. This is the work of an automated hack and one that I was prepared for. NewsBlur was back online a few hours later once the backups were restored and the Docker-made hole was patched.

It would make for a much more dramatic read if I was hit through a vulnerability in Docker instead of a footgun. By having Docker silently override the firewall, Docker has made it easier for developers who want to open up ports on their containers at the expense of security. Better would be for Docker to issue a warning when it detects that the most popular firewall on Linux is active and filtering traffic to a port that Docker is about to open.

The second reason we know that no data was taken comes from looking through the MongoDB access logs. With these rich and verbose logging sources we can invoke a pretty neat command to find everybody who is not one of the 100 known NewsBlur machines that has accessed MongoDB.


$ cat /var/log/mongodb/mongod.log | egrep -v "159.65.XX.XX|161.89.XX.XX|<< SNIP: A hundred more servers >>"

2021-06-24T01:33:45.531+0000 I NETWORK  [listener] connection accepted from 171.25.193.78:26003 #63455699 (1189 connections now open)
2021-06-24T01:33:45.635+0000 I NETWORK  [conn63455699] received client metadata from 171.25.193.78:26003 conn63455699: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:33:46.010+0000 I NETWORK  [listener] connection accepted from 171.25.193.78:26557 #63455724 (1189 connections now open)
2021-06-24T01:33:46.092+0000 I NETWORK  [conn63455724] received client metadata from 171.25.193.78:26557 conn63455724: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:33:46.500+0000 I NETWORK  [conn63455724] end connection 171.25.193.78:26557 (1198 connections now open)
2021-06-24T01:33:46.533+0000 I NETWORK  [conn63455699] end connection 171.25.193.78:26003 (1200 connections now open)
2021-06-24T01:34:06.533+0000 I NETWORK  [listener] connection accepted from 185.220.101.6:10056 #63456621 (1266 connections now open)
2021-06-24T01:34:06.627+0000 I NETWORK  [conn63456621] received client metadata from 185.220.101.6:10056 conn63456621: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:34:06.890+0000 I NETWORK  [listener] connection accepted from 185.220.101.6:21642 #63456637 (1264 connections now open)
2021-06-24T01:34:06.962+0000 I NETWORK  [conn63456637] received client metadata from 185.220.101.6:21642 conn63456637: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - starting
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - dropping 1 collections
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - dropping collection: config.transactions
2021-06-24T01:34:08.020+0000 I STORAGE  [conn63456637] dropCollection: config.transactions (no UUID) - renaming to drop-pending collection: config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 }
2021-06-24T01:34:08.029+0000 I REPL     [replication-14545] Completing collection drop for config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 } (notification optime: { ts: Timestamp(1624498448, 1), t: -1 })
2021-06-24T01:34:08.030+0000 I STORAGE  [replication-14545] Finishing collection drop for config.system.drop.1624498448i1t-1.transactions (no UUID).
2021-06-24T01:34:08.030+0000 I COMMAND  [conn63456637] dropDatabase config - successfully dropped 1 collections (most recent drop optime: { ts: Timestamp(1624498448, 1), t: -1 }) after 7ms. dropping database
2021-06-24T01:34:08.032+0000 I REPL     [replication-14546] Completing collection drop for config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 } (notification optime: { ts: Timestamp(1624498448, 5), t: -1 })
2021-06-24T01:34:08.041+0000 I COMMAND  [conn63456637] dropDatabase config - finished
2021-06-24T01:34:08.398+0000 I COMMAND  [conn63456637] dropDatabase newsblur - starting
2021-06-24T01:34:08.398+0000 I COMMAND  [conn63456637] dropDatabase newsblur - dropping 37 collections

<< SNIP: It goes on for a while... >>

2021-06-24T01:35:18.840+0000 I COMMAND  [conn63456637] dropDatabase newsblur - finished

The above is a lot, but the important bit of information to take from it is that by using a subtractive filter, capturing everything that doesn’t match a known IP, I was able to find the two connections that were made a few seconds apart. Both connections from these unknown IPs occurred only moments before the database-wide deletion. By following the connection ID, it became easy to see the hacker come into the server only to delete it seconds later.

Interestingly, when I visited the IP address of the two connections above, I found a Tor exit router.

This means that it is virtually impossible to track down who is responsible due to the anonymity-preserving quality of Tor exit routers. Tor exit nodes have poor reputations due to the havoc they wreak. Site owners are split on whether to block Tor entirely, but some see the value of allowing anonymous traffic to hit their servers. In NewsBlur’s case, because NewsBlur is a home of free speech, allowing users in countries with censored news outlets to bypass restrictions and get access to the world at large, the continuing risk of supporting anonymous Internet traffic is worth the cost.

3. What will happen to ensure this doesn’t happen again?

Of course, being in support of free speech and providing enhanced ways to access speech comes at a cost. So for NewsBlur to continue serving traffic to all of its worldwide readers, several changes have to be made.

The first change is the one that, ironically, we were in the process of moving to. A VPC, a virtual private cloud, keeps critical servers only accessible from other servers in a private network. But in moving to a private network, I need to migrate all of the data off of the publicly accessible machines. And this was the first step in that process.

The second change is to use database user authentication on all of the databases. We had been relying on the firewall to provide protection against threats, but when the firewall silently failed, we were left exposed. Granted, who’s to say this would have been caught if the firewall had failed while authentication was in place; I suspect the password needs to be long enough not to be brute-forced, because eventually, knowing that an open but password-protected DB is out there, it could very well end up on a list.
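
For reference, turning authentication on for a standalone mongod is a small config change (a sketch; a replica set like NewsBlur’s additionally needs keyfile or x.509 internal authentication):

# /etc/mongod.conf
security:
  authorization: enabled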

Lastly, a change needs to be made as to which database users have permission to drop the database. Most database users only need read and write privileges. Ideally, only a localhost-only user would be allowed to perform potentially destructive actions. If a rogue database user starts deleting stories, it would get noticed a whole lot faster than a database being dropped all at once.
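
A sketch of such a least-privilege user in the mongo shell (the user name is hypothetical; passwordPrompt() requires shell 4.2+; note that readWrite cannot drop the database, though it can still drop individual collections):

nbset:PRIMARY> use newsblur
nbset:PRIMARY> db.createUser({
...     user: "newsblur_app",
...     pwd: passwordPrompt(),
...     roles: [ { role: "readWrite", db: "newsblur" } ]
... })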

But each of these is only one piece of a defense strategy. As this well-attended Hacker News thread from the day of the hack made clear, a proper defense strategy can never rely on only one well-setup layer. And for NewsBlur that layer was an allowlist-only firewall that worked perfectly up until it didn’t.

As usual the real heroes are backups. Regular, well-tested backups are a necessary component to any web service. And with that, I’ll prepare to launch the big NewsBlur redesign later this week.

Shared by FrostedGlitch, 1232 days ago
11 public comments
seriousben
1231 days ago
Great root cause analysis of a security incident.
Canada
chrisrosa
1233 days ago
Great write up Samuel. And kudos for your swift and effective response.
San Francisco, CA
jshoq
1233 days ago
This is a great account of how to recover a service from a major outage. In this case, NewsBlur was attacked by a scripter that used a well-known hole to attack the system, and a well-planned and validated backup setup helped NewsBlur get their service back online quickly. This is a great read of a blameless post-mortem executed well.
JS
Seattle, WA
jqlive
1233 days ago
Thanks for the write up, it was interesting to read and very transparent of you. It would be an interesting read to know how you'll be applying ML Models to Newsblur.
CN/MX
samuel
1233 days ago
What a week. In other news, new blog design launched!
Cambridge, Massachusetts
deezil
1233 days ago
Thanks for being above-board with all this! The HackerNews comment section was a little brutal towards you about some things, but I like that you've been transparent about everything.
samuel
1233 days ago
HN only knows how to be brutal, which I always appreciate.
acdha
1232 days ago
Thanks for writing this up. That foot-gun really needs fixing.
BLueSS
1233 days ago
Thanks, Samuel, for your hard work and efforts keeping NewsBlur alive!
jepler
1233 days ago
My most commented HN story yet :)
Earth, Sol system, Western spiral arm
jgbishop
1233 days ago
Nice writeup.
Durham, NC
fxer
1233 days ago
> the hacker come into the server only to delete it seconds later.

> This tells us that the hacker was an automated digital vandal rather than a concerted hacking attempt. And if we were to pay the ransom, it wouldn’t do anything because the vandals don’t have the data and have nothing to release.

Guess they count on users not having enough monitoring to be able to confirm no data was exfil’d
Bend, Oregon
DMack
1233 days ago
I remember reading about the mongodb image's terrible defaults ON newsblur, probably even one of your shares. Very surprised to learn that it's still a thing, especially after the waves it made back then
JayM
1233 days ago
Bummer. But glad all was well in the end. Yay backups.
Atlanta, GA

v1.1.0

First and foremost, thank you to 3mux's new co-maintainer @PotatoParser. He has significantly improved the code quality within 3mux, and he's to thank for a lot of the stability introduced in this release, enough that we finally feel comfortable tagging an official v1 release.

This is the first official release since v0.3.0, and it includes significant performance, reliability, and usability improvements over the pre-release v1.0.1.

Migration note: the v1.1.0 client can connect to v1.0.1 sessions, but the client freezes upon running 3mux detach. No session data should be lost, but the client terminal will likely have to be restarted. Future releases will aim for better backward compatibility than this.

Build Support:

  • 3mux can now be run through Nix flakes (requires Nix 2.4+) (#114)
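
Assuming the flake exposes a default app, invoking it could look like this (a sketch, not taken from the 3mux docs):

$ nix run github:aaronjanse/3mux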

Performance fixes:

  • 3mux uses far fewer resources while idle, and it's also much more responsive (#106)
  • 3mux is more efficient in general (#104 #97)

Display fixes:

  • htop, nano and kakoune now work significantly better (#107)
  • Colors are more reliably handled, fixing issues seen when using bash (#94)
  • An issue was fixed where characters were disappearing in readline (#95)
  • The interactive session chooser no longer messes up the prompt upon Ctrl-C (#72)
  • Line wrapping works better now (#77)
  • Help page no longer depends upon tab width settings (#109)
  • Fixed bug that caused pane divider lines to disappear (#118)
  • Properly scroll when wrapping bottom right corner (#116)

Error handling:

  • Errors are now more carefully handled to avoid broken states (#82 #100)
  • Fuzzing is more extensively used to find bugs (e.g. #92)

Changes:

  • Cleaner UI and implementation for sessions (#91)

Shared by FrostedGlitch, 1341 days ago

Release 10.7.0

Jellyfin 10.7.0

Stable release for 10.7.0

GitHub project for release: https://github.com/orgs/jellyfin/projects/27

Binary assets: https://repo.jellyfin.org/releases/server

User-facing Features

  • SyncPlay for TV shows and Music
  • Significantly improved web performance due to ES6 upgrades, Webpack, and assets served with gzip compression
  • Migration of further databases to the new EFCore database framework
  • Redesigned OSD and Up Next dialog
  • New PDF reader functionality
  • New Comics (cbz/cbr) reader functionality
  • New HDR thumbnails extraction functionality
  • New HDR Tone mapping functionality with Nvidia NVENC, AMD AMF and Intel VAAPI (additional configuration is required)
  • HEVC remuxing or transcoding over fMP4 on supported Apple devices (disabled by default)
  • Allow custom fonts to be used for ASS/SSA subtitle rendering
  • New default library image style (generated on library scans)
  • New QuickConnect functionality (disabled by default)
  • Support for limiting the number of user sessions
  • Support for uploading subtitles
  • Improved networking backend
  • Upgrade to .NET SDK 5.0 for improved performance in the backend
  • Fix issues with reboot script on Linux with Systemd
  • Various fixes for iOS Safari and Edge Chromium browsers
  • Various transcoding improvements
  • Various bugfixes and minor improvements
  • Various code cleanup
  • Updated and improved plugin management interface, preventing bugs when upgrading as well as improving functionality
  • Fixed some bugs with DLNA

Release Notes

  • [ALL] Non-reversible database changes. Ensure you back up before upgrading.
  • [ALL] TVDB support has been removed from the core server. If TVDB metadata was enabled on a library, this will be disabled. TVDB support can now be obtained through a separate plugin available in the official Plugin Catalog.
  • [ALL] If you use a reverse proxy with X-Forwarded-For, and have a static proxy IP, consider setting this option in the Networking admin tab for more reliable parsing.
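
For example, with nginx in front of Jellyfin, the forwarding headers are typically set like this (a generic sketch rather than configuration from these notes; 8096 is Jellyfin's default HTTP port):

location / {
    proxy_pass http://127.0.0.1:8096;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}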

Client/Plugin (API/ABI) Developer Notes

  • We have migrated from ServiceStack to ASP.NET. Web API endpoints no longer accept HTTP Form requests; everything must be application/json (see the example after this list). NOTE: Plugins that implement endpoints will also have to migrate.
  • Plugins must now target net5.0.
  • IHttpClient removal: Now inject IHttpClientFactory instead.
  • HttpException removal: Now catch HttpResponseException instead.
  • Services can be registered to the DI pipeline.
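
To illustrate the form-to-JSON change, a request that previously posted form-encoded data now has to send a JSON body, roughly like this (the endpoint path and body are hypothetical placeholders; X-Emby-Token is one of the auth headers Jellyfin accepts):

$ curl -X POST "http://jellyfin.local:8096/HypotheticalEndpoint" \
    -H "Content-Type: application/json" \
    -H "X-Emby-Token: <api-key>" \
    -d '{"enabled": true}'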

Please see the Jellyfin Development Matrix channel for questions or further details on these changes.

Known Bugs/Tracker for 10.7.1 hotfix

Bugs which are already known and being worked on are listed in this issue: jellyfin/jellyfin-meta#1

Changelog

GitHub Project: https://github.com/orgs/jellyfin/projects/27

jellyfin [599]

jellyfin-web [474]

Shared by FrostedGlitch, 1345 days ago: "Jellyfin's biggest release!!"

Developing an official iOS app for Mastodon

One of the ways Mastodon sets itself apart from current-day Twitter is its API-first approach (every function available through the web interface is available through the API, in fact, our web client is just an API client that runs in the browser). A third-party app ecosystem contributed in large part to Twitter’s success at the beginning, with many innovative features like retweets coming originally from unofficial apps, and it is serving a similarly instrumental role for Mastodon.
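
As a small illustration of the API-first claim, a server's public timeline can typically be read with a plain HTTP request, no app required (the instance domain is a placeholder):

$ curl "https://mastodon.example/api/v1/timelines/public?limit=1"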

Shared by FrostedGlitch, 1375 days ago
1 public comment
lamnatos
1376 days ago
Oh, that's an interesting development.
Athens, Greece
BalooUriza
1376 days ago
Indeed. Hopefully they don't make the same mistake as every other fediverse app and do the reverse chronological thing as a sort order you can't change, and then fail to include an offline mode.
fxer
1376 days ago
What’s wrong with newest-to-oldest, even Newsblur does that
BalooUriza
1375 days ago
@fxer: You can change it in Newsblur. People read and skim easiest in chronological order, oldest first. When it's the other direction you end up reading a paragraph, and then the paragraph before it is *after* it in the list. Who the fuck reads that way comfortably? Nobody.
BalooUriza
1375 days ago
Actually newsblur is a good example of how to do it the right way. Scrolling marks items as read, and you have the ability to switch directions to oldest first, and it automatically remembers where it left off. There's no reason *not* to do it this way...