
A controversial developer circumvented one of Mastodon's primary tools for blocking bad actors, all so that his servers could connect to Threads.

We’ve criticized the security and privacy mechanisms of Mastodon in the past, but this new development should be eye-opening. Alex Gleason, the former Truth Social developer behind Soapbox and Rebased, has come up with a sneaky workaround to Authorized Fetch: if your domain is blocked for a fetch, just sign the request with a different domain name instead.

Gleason was originally investigating Threads federation to determine whether a failure to fetch posts indicated a software compatibility issue or whether Threads had blocked his server. After checking some logs and experimenting, he came to a conclusion.

“Fellas,” Gleason writes, “I think threads.net might be blocking some servers already.”

What Gleason found was that Threads attempts to verify domain names before allowing access to a resource, an approach very similar to Mastodon's Authorized Fetch.

You can see Threads fetching your own server by looking for the facebookexternalua user agent in your web server's access logs. Try this command on your server:

grep facebookexternalua /var/log/nginx/access.log

If entries show up there, Threads is attempting to verify your server's signatures before allowing it to access Threads data.
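To see why Gleason's workaround defeats this kind of check, it helps to sketch what an Authorized Fetch-style block actually looks at: the domain of the key that signed the request, not the server that will ultimately use the response. This is a minimal illustration with hypothetical helper names, not Mastodon's or Rebased's actual code.

```python
# Sketch of a domain-based fetch check (Authorized Fetch-style).
# Hypothetical names and domains; not real Mastodon/Threads code.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-blocked.social"}

def key_domain(key_id: str) -> str:
    """Extract the domain from an HTTP Signature keyId URL."""
    return urlparse(key_id).hostname

def fetch_allowed(key_id: str) -> bool:
    # The server only sees the domain of the signing key; it cannot
    # tell which server will actually consume the fetched data.
    return key_domain(key_id) not in BLOCKED_DOMAINS

# A blocked server's own key is rejected...
assert not fetch_allowed("https://example-blocked.social/actor#main-key")
# ...but a signature from any unblocked domain passes the same check,
# which is exactly the loophole the workaround exploits.
assert fetch_allowed("https://example-other.social/actor#main-key")
```

The design weakness is that the signature proves control of *a* domain, not that the fetch is on behalf of that domain, so signing with an unblocked domain's key sidesteps the block entirely.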

 

“Verizon royally fucked up,” Poppy told me in a phone call. “There’s no way around it.” Verizon, she added, was “100% at fault.”

Verizon handed Poppy’s personal data, including the address on file and phone logs, to a stalker who later directly threatened her and drove to an address armed with a knife. Police arrested the suspect, Robert Michael Glauner, who is charged with fraud and stalking offenses, but not before he had harassed Poppy, her family, friends, workplace, and her daughter’s therapist, Poppy added. 404 Media has changed Poppy’s name to protect her identity.

Glauner’s alleged scheme was not sophisticated in the slightest: he used a ProtonMail account, not a government email, to make the request, and used the name of a police officer that didn’t actually work for the police department he impersonated, according to court records. Despite those red flags, Verizon still provided the sensitive data to Glauner.

Remarkably, in a text message to Poppy sent during the fallout of the data transfer, a Verizon representative told Poppy that the corporation was a victim too. “Whoever this is also victimized us,” the Verizon representative wrote, according to a copy of the message Poppy shared with 404 Media. “We are taking every step possible to work with the police so they can identify them.”

In the interview with 404 Media, Poppy pointed out that Verizon is a multi-billion dollar company and yet still made this mistake. “They need to get their shit together,” she said.

 

Comcast has confirmed that hackers exploiting a critical-rated security vulnerability accessed the sensitive information of almost 36 million Xfinity customers.

This vulnerability, known as “CitrixBleed,” is found in Citrix networking devices often used by big corporations and has been under mass exploitation by hackers since late August. Citrix made patches available in early October, but many organizations did not patch in time. Hackers have used the CitrixBleed vulnerability to hack into big-name victims, including aerospace giant Boeing, the Industrial and Commercial Bank of China, and international law firm Allen & Overy.

Comcast's statement

Notice To Customers of Data Security Incident
December 18, 2023 04:30 PM Eastern Standard Time

PHILADELPHIA--(BUSINESS WIRE)--Xfinity is providing notice of a recent data security incident. Starting today, customers are being notified through a variety of channels, including through the Xfinity website, email, and news media.

On October 10, 2023, Citrix announced a vulnerability in software used by Xfinity and thousands of other companies worldwide. Citrix issued additional mitigation guidance on October 23, 2023. Xfinity promptly patched and mitigated the Citrix vulnerability within its systems. However, during a routine cybersecurity exercise on October 25, Xfinity discovered suspicious activity and subsequently determined that between October 16 and October 19, 2023, there was unauthorized access to its internal systems that was concluded to be a result of this vulnerability.

Xfinity notified federal law enforcement and initiated an investigation into the nature and scope of the incident. On November 16, Xfinity determined that information was likely acquired. After additional review of the affected systems and data, Xfinity concluded on December 6, 2023, that the customer information in scope included usernames and hashed passwords; for some customers, other information may also have been included, such as names, contact information, last four digits of social security numbers, dates of birth and/or secret questions and answers. However, the data analysis is continuing.

Xfinity has required customers to reset their passwords to protect affected accounts. In addition, Xfinity strongly recommends that customers enable two-factor or multi-factor authentication to secure their Xfinity account, as many Xfinity customers already do. While Xfinity advises customers not to re-use passwords across multiple accounts, the company is recommending that customers change passwords for other accounts for which they use the same username and password or security question.

Customers with questions can contact Xfinity’s dedicated call center at 888-799-2560 toll-free 24 hours a day, seven days a week. More information is available on the Xfinity website at www.xfinity.com/dataincident.

Customers trust Xfinity to protect their information, and the company takes this responsibility seriously. Xfinity remains committed to continued investment in technology, protocols and experts dedicated to helping to protect its customers.

 

We answer the questions readers asked in response to our guide to anonymizing your phone

About the LevelUp series: At The Markup, we’re committed to doing everything we can to protect our readers from digital harm, write about the processes we develop, and share our work. We’re constantly working on improving digital security, respecting reader privacy, creating ethical and responsible user experiences, and making sure our site and tools are accessible.

This is a follow-up article. Here's the first piece, if you'd like to read that one as well.


First, a quick primer on the tech: ACR identifies what’s displayed on your television, including content served through a cable TV box, streaming service, or game console, by continuously grabbing screenshots and comparing them to a massive database of media and advertisements. Think of it as a Shazam-like service constantly running in the background while your TV is on.

All of this is in the second paragraph of the article.

 

These TVs can capture and identify 7,200 images per hour, or approximately two every second. The data is then used for content recommendations and ad targeting, which is a huge business; advertisers spent an estimated $18.6 billion on smart TV ads in 2022, according to market research firm eMarketer.
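The "Shazam-like" comparison above can be made concrete: ACR systems generally reduce each captured frame to a compact fingerprint and match it against a database by similarity, rather than comparing raw pixels. The sketch below uses a simple Hamming-distance match over 64-bit fingerprints; real ACR pipelines are proprietary, and all names and values here are illustrative.

```python
# Illustrative fingerprint matching, ACR-style: compare compact
# perceptual hashes instead of raw screenshots. Hypothetical data.
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

def identify(screenshot_hash: int, db: dict, max_dist: int = 10):
    """Return the best-matching known content, or None if nothing is close."""
    best = min(db, key=lambda title: hamming(screenshot_hash, db[title]))
    return best if hamming(screenshot_hash, db[best]) <= max_dist else None

db = {
    "Ad: SodaCo 30s spot": 0xF0F0F0F0F0F0F0F0,
    "Movie XYZ, frame 1042": 0x123456789ABCDEF0,
}
# A captured frame whose fingerprint is one bit off the ad's entry:
print(identify(0xF0F0F0F0F0F0F0F1, db))  # prints "Ad: SodaCo 30s spot"
```

Tolerating a few differing bits is what lets the match survive compression artifacts, overlays, and scaling between the broadcast copy and what's on your screen.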

 

https://micronews.debian.org/2023/1702150551.html

Due to an issue in ext4 with data corruption in kernel 6.1.64-1, we are pausing the 12.3 image release for today while we attend to fixes. Please do not update any systems at this time; we urge caution for users with UnattendedUpgrades configured. Please see bug #1057843: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1057843
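For users who can't disable unattended upgrades outright, one way to follow the advisory is to pin the kernel package in place until a fixed version lands. This assumes an amd64 Debian install; adjust the package name for your architecture.

```shell
# Prevent apt (including unattended upgrades) from installing a new
# kernel until the ext4 fix is released.
sudo apt-mark hold linux-image-amd64

# Confirm the hold is in place:
apt-mark showhold

# Later, once a fixed kernel is available:
sudo apt-mark unhold linux-image-amd64
```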

 

ArsTechnica article on the letter. Just a short summary, with some more context on other reporting and investigations into the auto industry's privacy issues.

Does your company collect user data from its vehicles, including but not limited to the actions, behaviors, or personal information of any owner or user?
If so, please describe how your company uses data about owners and users collected from its vehicles. Please distinguish between data collected from users of your vehicles and data collected from those who sign up for additional services.
Please identify every source of data collection in your new model vehicles, including each type of sensor, interface, or point of collection from the individual and the purpose of that data collection.
Does your company collect more information than is needed to operate the vehicle and the services to which the individual consents?
Does your company collect information from passengers or people outside the vehicle? If so, what information and for what purposes?
Does your company sell, transfer, share, or otherwise derive commercial benefit from data collected from its vehicles to third parties? If so, how much did third parties pay your company in 2022 for that data?
Once your company collects this user data, does it perform any categorization or standardization procedures to group the data and make it readily accessible for third-party use?
Does your company use this user data, or data on the user acquired from other sources, to create user profiles of any sort?
How does your company store and transmit different types of data collected on the vehicle? Do your company’s vehicles include a cellular connection or Wi-Fi capabilities for transmitting data from the vehicle?
Does your company provide notice to vehicle owners or users of its data practices?
Does your company provide owners or users an opportunity to exercise consent with respect to data collection in its vehicles?
If so, please describe the process by which a user is able to exercise consent with respect to such data collection. If not, why not?
If users are provided with an opportunity to exercise consent to your company’s services, what percentage of users do so?
Do users lose any vehicle functionality by opting out of or refusing to opt in to data collection? If so, does the user lose access only to features that strictly require such data collection, or does your company disable features that could otherwise operate without that data collection?
Can all users, regardless of where they reside, request the deletion of their data? If so, please describe the process through which a user may delete their data. If not, why not?
Does your company take steps to anonymize user data when it is used for its own purposes, shared with service providers, or shared with non-service provider third parties? If so, please describe your company’s process for anonymizing user data, including any contractual restrictions on re-identification that your company imposes.
Does your company have any privacy standards or contractual restrictions for the third-party software it integrates into its vehicles, such as infotainment apps or operating systems? If so, please provide them. If not, why not?
Please describe your company’s security practices, data minimization procedures, and standards in the storage of user data.
Has your company suffered a leak, breach, or hack within the last ten years in which user data was compromised?
If so, please detail the event(s), including the nature of your company’s system that was exploited, the type and volume of data affected, and whether and how your company notified its impacted users.
Is all the personal data stored on your company’s vehicles encrypted? If not, what personal data is left open and unprotected? What steps can consumers take to limit this open storage of their personal information on their cars?
Has your company ever provided to law enforcement personal information collected by a vehicle?
If so, please identify the number and types of requests that law enforcement agencies have submitted and the number of times your company has complied with those requests.
Does your company provide that information only in response to a subpoena, warrant, or court order? If not, why not?
Does your company notify the vehicle owner when it complies with a request?

 

ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of personally identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spit out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”

Edit: The full paper that's referenced in the article can be found here

 

Blur tools for Signal: if you take or edit photos of crowds or strangers with Signal, you can use our face blur tool to quickly hide people's biometric face data.

You can then export the photo from Signal if you want to post it publicly.


"References illicit drugs" lol