James Long

Syracuse Startup Podcast - Episode 8


Episode 8 is up!

In this episode, I had the good fortune to speak with Kasey Almanzi of Bowers & Company. Kasey is a certified public accountant and a tax supervisor at Bowers & Company, a Central New York-based accounting firm of approximately 90 accounting professionals.

Kasey and I had a great conversation about some of the basic tax issues to think about when starting a business or operating a small startup. I learned a lot myself, and anyone who is just getting started in building a business really needs to listen up here.
Note that there are some minor audio quality issues with this recording, as we did the podcast via Zoom, and for reasons I never could figure out, I had to use my phone, rather than a computer mic, to get it to work. Such is life, I suppose.

Anyway, thanks to Kasey for coming on the podcast. I really enjoyed it, and I appreciate the time and effort involved in providing this great tax information.

The podcast can be accessed on our website at https://www.long.law/syracuse-startup, on Spotify at https://open.spotify.com/episode/20xnYODHrReaOQyg5EdvD8?si=a1fab684d1c941a9, Apple Music, Google, Stitcher, or anywhere else you get your podcasts.

James Long

Rundown of the Top 15 Cybersecurity Threats of 2019-2020


Every year the European Union Agency for Cybersecurity (ENISA) releases a series of reports itemizing the top cybersecurity threats of the past year. As part of a cybersecurity risk assessment, reports like these are invaluable because they tell us how best to allocate security resources.

In our office, I wanted to organize some of our thinking around the threats identified in the ENISA report.

Me: Hi. I need a rundown of the top threats in the most recent ENISA report, can you get that to me?
Jim: Sure!
Me: Yeah.
Jim: Okay.

46.5% of all malware in e-mail messages were found in the ‘.docx’ file type.
— ETL 2020: Malware

I knew this rundown would be helpful to our clients, because the combination of increased capabilities among bad actors along with the coronavirus pandemic together brought significant changes in the cyber threat landscape over the past year. Unfortunately, our reporting on the issue took a little longer than usual as we worked very hard to get this information to you in a useful format for distribution.

What the hell’s a rundown?
— Jim

One of the biggest developments over the year was the marked increase in employees working from home all over the world. The ability to do this was a major component in our economy’s ability to persevere despite brick-and-mortar shutdowns. With employees working from home, cybersecurity specialists had to adapt existing defenses to new infrastructure, particularly where the entry points were employees’ home networks and devices. In other words, while cybersecurity frameworks have, for the last decade, been trying to increase management of employee devices, suddenly the world’s banks, insurance carriers, and industrial conglomerates were being run off of employees’ unprotected home networks.

Considering the popularity of Content Management Systems (CMS) among internet users, these systems are an attractive target for malicious actors.
— ETL 2020: Web-based Attacks

The landscape was changing quickly, so I needed to get this information out to our clients asap…

Jim: When did you need that rundown by?
Me: As soon as possible.
Jim: Okay.
Me: Just get it right.
Jim: Yeah. Gotcha. Of course. I’m gonna dive in. To the rundown. I’ll be exhausted ’cause it’s like a triathlon. [At door.] Do you want to close this? Close, or keep it?

ENISA recorded a 667% increase in phishing scams in only 1 month during the COVID-19 pandemic.
— ETL 2020: Phishing

Fortunately, ENISA provided a summary that would make this rundown easier to explain, with ten main trends in the threat landscape over the last year:

  1. The attack surface in cybersecurity continues to expand as we are entering a new phase of the digital transformation.

  2. There will be a new social and economic norm after the COVID-19 pandemic even more dependent on a secure and reliable cyberspace.

  3. The use of social media platforms in targeted attacks is a serious trend and reaches different domains and types of threats.

  4. Finely targeted and persistent attacks on high-value data (e.g. intellectual property and state secrets) are being meticulously planned and executed by state-sponsored actors.

  5. Massively distributed attacks with a short duration and wide impact are used with multiple objectives such as credential theft.

  6. The motivation behind the majority of cyber-attacks is still financial.

  7. Ransomware remains widespread with costly consequences to many organizations.

  8. Still many cybersecurity incidents go unnoticed or take a long time to be detected.

  9. With more security automation, organizations will invest more in preparedness, using Cyber Threat Intelligence as their main capability.

  10. The number of phishing victims continues to grow, since phishing exploits the human dimension, the weakest link.

Jim: Hey dude, you know what a “rundown” is?
Oscar: Use it in a sentence.
Jim: “Uh, can you get this rundown for me?” [impersonating me]
Oscar: Try another sentence.
Jim: “This rundown better be really good”?
Oscar: I don’t know but it sounds like the rundown is really important.
Jim: He asked me to do this rundown of the Enisa Top 15.
Oscar: Why don’t you just ask him–
Jim: No. I can’t. It was like, hours ago.
Oscar: What have you been doing?
Kevin: Try it in another sentence.

The threat landscape is becoming extremely difficult to map. Not only attackers are developing new techniques to evade security systems, but threats are growing in complexity and precision in targeted attacks.
— ENISA Threat Landscape Report 2020

Because of the increased complexity of cyber threats worldwide, assessments like these are becoming more and more important for businesses trying to address the most likely cyber threats. Attacks are becoming more costly, and in some cases harder to prevent. So I wanted my best people to gather the data and report back with a rundown of the biggest threats.

Me: You started on that rundown yet? [Looks at Jim’s screen.]
Jim: Oh, this is just something I’m taking a break with.
Me: Oh.
Jim: I will get back to the rundown, uh, right now.
Me: Okay, great.
Jim: Hey you know what? Do you have a rundown that I could take a look at, just so I know what type of rundown you’re looking for ?
Me: Just keep it simple.
Jim: Keeping it simple, that’s what I’m doing. But I am working hard on this one. Real hard.
Me: You’re working hard? On this?
Jim: No. Not too hard. Not harder than I should.

After a short delay as we determined the best way to present this information, we were able to assemble ENISA’s Top 15 cybersecurity threats, with executive summaries for each topic. Enjoy!


ENISA’s Top 15 Security Threats

  1. Malware

    Malware is a common type of cyber-attack in the form of malicious software. Families of malware include cryptominers, viruses, ransomware, worms and spyware. Its common objectives are information or identity theft, espionage and service disruption.

    During 2019, cryptominers were one of the most prevalent malware families in the threat landscape, resulting in high IT costs, increased electricity consumption and reduced employee productivity. Ransomware presented a slight increase in 2019 compared with 2018, though still remaining at the bottom of the malware type list.

    Web and e-mail protocols were the most common initial attack vectors used to spread malware. However, using brute force techniques or exploiting system vulnerabilities, certain malware families were able to spread even further inside a network. Although global detections of attacks have remained at the previous year’s levels, there was a noticeable shift from consumer to business targets.

  2. Web-based Attacks

    Web-based attacks are an attractive method by which threat actors can delude victims using web systems and services as the threat vector. This covers a vast attack surface: for instance, using malicious URLs or malicious scripts to direct the user to a desired website or to download malicious content (watering hole attacks, drive-by attacks), or injecting malicious code into a legitimate but compromised website to steal information (i.e. formjacking) for financial gain, information stealing or even extortion via ransomware. In addition to these examples, internet browser exploits and content management system (CMS) compromises are important vectors observed by different research teams being used by malicious actors.

    Brute-force attacks, for example, operate by overwhelming a web application with username and password login attempts. Web-based attacks can affect the availability of web sites, applications and application programming interfaces (APIs), breaching the confidentiality and integrity of data.
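    To make the brute-force idea concrete, here is a minimal sketch of the defensive counterpart: locking an account out after too many failed logins inside a sliding window. The thresholds and the in-memory store below are my own illustrative assumptions, not anything prescribed by the ENISA report.

```python
import time
from collections import defaultdict

# Illustrative sketch, not production code: a real deployment would use
# a shared store and tuned thresholds. Here, a username is locked out
# once it accumulates MAX_FAILURES failed logins within WINDOW_SECONDS.
MAX_FAILURES = 5
WINDOW_SECONDS = 300

_failures = defaultdict(list)  # username -> timestamps of recent failed logins

def record_failure(username, now=None):
    _failures[username].append(time.time() if now is None else now)

def is_locked_out(username, now=None):
    now = time.time() if now is None else now
    # Keep only failures inside the sliding window, then compare.
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent
    return len(recent) >= MAX_FAILURES
```

    A login handler would call record_failure on each bad password and refuse further attempts while is_locked_out is true, which blunts the "overwhelm with login attempts" tactic described above.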

  3. Phishing

    Phishing is the fraudulent attempt to steal user data such as login credentials, credit card information, or even money using social engineering techniques. This type of attack is usually launched through e-mail messages, appearing to be sent from a reputable source, with the intention of persuading the user to open a malicious attachment or follow a fraudulent URL. A targeted form of phishing called ‘spear phishing’ relies on upfront research on the victims so that the scam appears more authentic, thereby making it one of the most successful types of attack on enterprises’ networks.

    An emotional response justifies many people’s actions when they are phished, and that is exactly what hackers are looking for. In a training context, that is what a phishing simulation should try to achieve. Training e-mail users is one of the most commonly used measures for preventing phishing, but results are not convincing, since threat actors are constantly changing their modus operandi. The domain-based message authentication, reporting, and conformance (DMARC) standard ensures that e-mail from fraudulent domains is blocked, diminishing the rate of success of phishing, spoofing and spam attacks.
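    To make the DMARC discussion concrete: a domain’s policy is published as a DNS TXT record at _dmarc.<domain>, which receiving mail servers look up and parse. The sketch below is purely illustrative (the parse_dmarc helper and the example domain are my own invention), though the v=, p= and rua= tags follow the DMARC standard.

```python
# A DMARC policy is a DNS TXT record of semicolon-separated tag=value
# pairs. This minimal, illustrative parser extracts the pairs a receiving
# mail server would act on; 'p' is the policy the domain owner requests
# for mail that fails authentication (none, quarantine, or reject).
def parse_dmarc(record: str) -> dict:
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com")
print(policy["p"])  # 'reject': ask receivers to reject failing mail
```

    A domain owner publishing p=reject is what actually blocks the fraudulent mail described above; p=none merely requests monitoring reports.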

    For now, e-mail continues to be the number one mechanism for phishing, but perhaps not for long. We are already seeing an increase in the use of social media messaging, WhatsApp and other platforms to conduct attacks. The most relevant change will be in the methods used to send the messages, which will become more sophisticated with the adoption of adversarial Artificial Intelligence (AI) to prepare and send them. Phishing and spear phishing are also major attack vectors for other threats, such as unintentional insider threats.

  4. Web application attacks

    Web applications and technologies have become a core part of the internet by adopting different uses and functionalities. The increase in the complexity of web applications and their widespread services creates challenges in securing them against threats with diverse motivations, from financial or reputational damage to the theft of critical or personal information. Web services and applications depend mostly on databases to store or deliver the required information. SQL Injection (SQLi) attacks are a well-known example and the most common threat against such services. Cross-site scripting (XSS) attacks are another example. In this type of attack, the malicious actor misuses weaknesses in forms or other input functionalities of web applications, leading to malicious outcomes such as the user being redirected to a malicious website.
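    To illustrate the SQL injection point above, here is a minimal sketch using Python’s built-in sqlite3 module (the table and inputs are invented for the example), contrasting a vulnerable string-built query with a parameterized one:

```python
import sqlite3

# The classic SQL injection mistake is splicing user input into the SQL
# text; the fix is a parameterized query, where the driver binds the
# input strictly as data, never as SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "alice' OR '1'='1"

# Vulnerable: the input becomes part of the SQL itself, so the
# OR '1'='1' clause matches every row.
vulnerable = f"SELECT secret FROM users WHERE name = '{attacker_input}'"
print(len(conn.execute(vulnerable).fetchall()))  # 1 row leaks

# Safe: the ? placeholder binds the whole input as one literal string,
# which matches no user name.
safe = "SELECT secret FROM users WHERE name = ?"
print(len(conn.execute(safe, (attacker_input,)).fetchall()))  # 0 rows
```

    The same principle carries over to XSS: treat user input as data (escape it on output) rather than letting it become part of the page’s markup.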

    While organizations are becoming more proficient and developing more consistent automation in their web application lifecycles, they are also demanding security as the most crucial part of their offerings and prioritization. This introduction of complex environments drives the adoption of new services such as Application Programming Interfaces (APIs). APIs create new challenges for web application security, requiring the organizations involved to consider more prevention and detection measures. For instance, roughly 80% of organizations adopting APIs deployed controls on their ingress traffic.

  5. Spam

    The first spam message was sent in 1978 by a marketing manager to 393 people via ARPANET. It was an advertising campaign for a new product from the company he worked for, the Digital Equipment Corporation. For those first 393 spammed people it was as annoying as it would be today, regardless of the novelty of the idea. Receiving spam is an inconvenience, but it may also create an opportunity for a malicious actor to steal personal information or install malware. Spam consists of sending unsolicited messages in bulk. It is considered a cybersecurity threat when used as an attack vector to distribute or enable other threats.

    Another noteworthy aspect is how spam may sometimes be confused or misclassified as a phishing campaign. The main difference between the two is the fact that phishing is a targeted action using social engineering tactics, actively aiming to steal users’ data. In contrast spam is a tactic for sending unsolicited e-mails to a bulk list. Phishing campaigns can use spam tactics to distribute messages while spam can link the user to a compromised website to install malware and steal personal data.

    Spam campaigns, during these last 41 years, have taken advantage of many popular global social and sports events, such as the UEFA Europa League final and the US Open, among others. Even so, nothing compares with the spam activity seen this year with the COVID-19 pandemic.

  6. Denial of service

    Distributed Denial of Service (DDoS) attacks occur when users of a system or service are not able to access the relevant information, services or other resources. This can be accomplished by exhausting the service or overloading a component of the network infrastructure. Malicious actors increased the number of attacks by targeting more sectors with different motives. While defense mechanisms and strategies are becoming more robust, malicious actors are also advancing their technical skills. Reports suggest that the use of reflected and amplified attack techniques, facilitating new vectors beyond the commonly known ones (UDP amplification etc.), has increased. Malicious actors are also improving their commercial tactics by starting to advertise their services on the open web. Historically, DDoS services were advertised on dark web forums, but now they use common social media channels such as YouTube and Reddit to promote their services.

    In 2019, we saw new entries in the top 10 list of source countries generating DDoS traffic (Hong Kong, South Africa, etc.). It was also the year that saw an increase in DDoS activity by botnets. IoT devices are a ‘hotbed’ for DDoS botnets, and China (24%), Brazil (9%) and Iran (6%) were considered the countries most infected with botnet agents. A security researcher predicted that the implementation and distribution of 5G networks will exponentially increase the number of connected devices, and hence the expansion of botnet networks.

    Although DoS attacks are not new to cybersecurity and network defenders, their level of sophistication is increasing, and malicious actors are observed to be actively running more reconnaissance activities than before.

  7. Identity theft

    Identity theft or identity fraud is the illicit use of a victim’s personally identifiable information (PII) by an impostor to impersonate that person and gain financial advantages and other benefits.

    According to an annual security report, at least 900 international cases of identity theft or identity-related crimes were detected. The most significant incidents reported were:

    • the exposure of nearly 106 million American and Canadian bank customers’ personal information from the Capital One data breach incident in March 2019;

    • the exposure of 170 million usernames and passwords used by digital game developer Zynga in September 2019;

    • the stealing of 20 million accounts from the British audio streaming service Mixcloud;

    • the compromise of 600,000 drivers’ and 57 million users’ personal information from Uber’s data breach incident in November 2019;

    • and the theft of 9 million personal records from EasyJet customers including identity cards and credit cards.

    The trend of identity theft is reflected to a great extent in data breaches which, compared with 2018, saw a record number of 3,800 publicly disclosed cases, 4.1 billion records exposed and an increase of 54% in the number of breaches reported.

  8. Data breaches

    A data breach is a type of cybersecurity incident in which information (or part of an information system) is accessed without the right authorization, typically with malicious intent, leading to the potential loss or misuse of that information. It also includes ‘human error’ that often happens during the configuration and deployment of certain services and systems, and may result in unintentional exposure of data.

    In many cases, companies or organizations are not aware of a data breach happening in their environment because of the sophistication of the attack and sometimes the lack of visibility and classification in their information system. Based on research, it takes approximately 206 days to identify a data breach in an organization. Thus, the time to contain, remediate and recover the data means that it takes longer to return to normal.

    Despite all the risks involved, organizations keep even more data, using cloud storage infrastructures and complex on-premises environments. These environments are gradually more exposed to new and different risks, proportional to the sensitivity of the information stored. It comes as no surprise that the number of data breaches increased in 2019 and 2020. New findings also suggest that the impact is not felt exclusively when a data breach is discovered; the financial impact can persist for more than 2 years after the initial incident.

  9. Insider threat

    An insider threat is an action that may result in an incident, performed by someone or a group of people affiliated with or working for the potential victim. There are several patterns associated with threats from the inside. A well-known insider threat pattern (also known as ‘privilege misuse’) occurs when outsiders collaborate with internal actors to gain unapproved access to assets. Insiders may cause harm unintentionally through carelessness or because of a lack of knowledge. Since these insiders often enjoy trust and privileges, as well as knowledge of the organizational policies, processes and procedures of the organization, it is difficult to distinguish between legitimate, malicious and erroneous access to applications, data and systems.

    The five types of insider threat can be defined according to their rationales and objectives:

    • the careless workers who mishandle data, break use policies and install unauthorized applications;

    • the inside agents who steal information on behalf of outsiders;

    • the disgruntled employees who seek to harm their organization;

    • the malicious insiders who use existing privileges to steal information for personal gain;

    • the feckless third parties who compromise security through negligence, misuse or malicious access to or use of an asset.

    All five types of insider threats should be continuously studied, as acknowledging their existence and their modus operandi should define the organization’s strategy for security and data protection.

  10. Botnets

    A botnet is a network of connected devices infected by bot malware. These devices are typically used by malicious actors to conduct Distributed Denial of Service (DDoS) attacks. Operating in a peer-to-peer (P2P) mode or from a Command and Control (C2) center, botnets are remotely controlled by a malicious actor to operate in a synchronized way to obtain a certain result.

    Technological advancements in distributed computing and automation have created an opportunity for malicious actors to explore new techniques and improve their tools and attack methods. Thanks to this, botnets operate in much more distributed and automated ways and are available from self-service and ready-to-use providers.

    Malicious bots, referred to as ‘bad bots’, are not only constantly evolving, but their operators’ skill sets and the bots’ level of development are becoming highly specialized in certain applications, such as defense providers, or even in evasion techniques. From a different perspective, botnets provide a vector for cyber-criminals to launch various operations, from e-banking fraud to ransomware, mining cryptocurrencies and DDoS attacks.

  11. Physical manipulation, damage, theft and loss

    Physical tampering, damage, theft and loss have drastically changed in the past few years. The integrity of devices is vital for technology to become mobile and for most implementations of the Internet of Things (IoT). IoT can enhance physical security with more advanced and complex solutions. This way, IP-based security systems with smart sensors, Wi-Fi cameras, smart security lighting, drones and electronic locks can provide surveillance data that are evaluated by Artificial Intelligence (AI) and Machine Learning (ML) mechanisms to identify threats and respond with minimum delay and maximum accuracy. However, intelligent buildings, mobile devices and smart wearables can be exploited to bypass physical security measures.

    In 2019, ATM- and POS-related physical attacks continued in Europe and worldwide, but the resulting losses were lower than the average over the past decade. The good news is that companies, IT managers and decision makers are leaning towards hybrid cyber and physical security plans, although in the past physical security was not a priority.

  12. Information leakage

    A data breach occurs when data for which an organization is responsible is subject to a security incident resulting in a breach of confidentiality, availability or integrity. A data breach frequently causes an information leakage, which is one of the major cyber threats, covering a wide variety of compromised information, from personally identifiable information (PII) and financial data stored in IT infrastructures to personal health information (PHI) kept in healthcare providers’ repositories.

    When security breaches are encountered in the headlines of bulletins, blogs, newspapers, and technical reports, the focus is mostly either on adversaries or on the catastrophic failure of the cyber-defense processes and techniques. Nevertheless, the indisputable truth is that, despite the impact or scope of such an event, the breach is usually caused by an individual’s action or by an organizational process failure.

  13. Ransomware

    Ransomware has become a popular weapon in the hands of malicious actors who try to harm governments, businesses and individuals on a daily basis. In such cases, the ransomware victim may suffer economic losses either by paying the ransom demanded or by paying the cost of recovering from the loss if they do not comply with the attacker’s demands. In a 2019 incident, Baltimore, Maryland suffered a lockout, and recovery is expected to cost US $18.2 million (ca. €15.4 million), although the city refused to pay the ransom. With the number of incidents growing, it is evident that becoming a victim is not a question of ‘if’ but of ‘when’. However, in the majority of countries’ fights against ransomware, several challenges need to be addressed, such as the lack of coordination and collaboration between agencies and authorities, and the lack of legislation that clearly criminalizes ransomware attacks.

    Although cyber insurance policies have existed since the early 2000s, ransomware attacks are one of the main reasons for the increased interest in this type of insurance over the last 5 years. In some of the 2019 incidents, the ransom or the costs of recovery were covered by such contracts. Unfortunately, if potential ransomware targets are known to be insured, the attackers assume that they will most probably be paid. Another downside for the victim is that insurance providers pay the ransom in advance to mitigate the damage and to keep the victim’s reputation intact. However, complying by paying ransoms encourages the hacker community and ensures neither the victim’s recovery nor their reputation.

  14. Cyberespionage

    Cyber espionage is considered both a threat and a motive in the cybersecurity playbook. It is defined as the use of computer networks to gain illicit access to confidential information, typically that held by a government or other organization.

    In 2019, many reports revealed that global organizations consider cyber espionage (or nation-state-sponsored espionage) a growing threat affecting industrial sectors, as well as critical and strategic infrastructures across the world, including government ministries, railways, telecommunication providers, energy companies, hospitals and banks. Cyber espionage focuses on driving geopolitics, and on stealing state and trade secrets, intellectual property rights and proprietary information in strategic fields. It also mobilizes actors from the economy, industry and foreign intelligence services, as well as actors who work on their behalf. In a recent report, threat intelligence analysts were not surprised to learn that 71% of organizations are treating cyber espionage and other threats as a ‘black box’ and are still learning about them.

    In 2019, the number of nation-state-sponsored cyber-attacks targeting the economy increased and it is likely to continue this way. In detail, nation-state-sponsored and other adversary-driven attacks on the Industrial Internet of Things (IIoT) are increasing in the utilities, oil and natural gas (ONG), and manufacturing sectors. Furthermore, cyber-attacks conducted by advanced persistent threat (APT) groups indicate that financial attacks are often motivated by espionage. Using tactics, techniques and procedures (TTPs) akin to those of their espionage counterparts, groups such as the Cobalt Group, Carbanak and FIN7 have allegedly been targeting large financial institutions and restaurant chains successfully.

    The European Parliament’s Committee of Foreign Affairs called upon Member States to establish a cyber-defense unit and to work together on their common defense. It stated that ‘the Union’s strategic environment has been deteriorating ... in order to face the multiple challenges that directly or indirectly affect the security of its Member States and its citizens; whereas issues that affect the security of EU citizens include: armed conflicts immediately to the east and south of the European continent and fragile states; terrorism –and in particular Jihadism –, cyber-attacks and disinformation campaigns; foreign interference in European political and electoral processes’.

    Threat actors motivated by financial, political, or ideological gain will increasingly focus attacks on supplier networks with weak cybersecurity programs. Cyber espionage adversaries have slowly shifted their attack patterns to exploiting third- and fourth-party supply chain partners.

  15. Cryptojacking

    Cryptojacking (also known as cryptomining) is the unauthorized use of a device’s resources to mine cryptocurrencies. Targets include any connected device, such as computers and mobile phones; however, cybercriminals have been increasingly targeting cloud infrastructures. This type of attack has not attracted much attention from law enforcement agencies and its abuse is rarely reported, mainly because of its relatively few negative consequences. Nevertheless, organizations may notice higher IT costs, degraded computer components, increased electricity consumption and reduced employee productivity caused by slower workstations.
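    As a rough illustration of the symptoms just mentioned (higher electricity consumption, slower workstations), a defender might flag sustained, unexplained CPU load. The heuristic below is a hypothetical sketch of that idea, not a real detector:

```python
# Hypothetical sketch: flag a host whose CPU utilization samples
# (each between 0.0 and 1.0) all stay above a threshold for a whole
# monitoring window, one crude symptom of a hidden cryptominer.
def sustained_high_cpu(samples, threshold=0.9):
    """Return True if every sample in the window exceeds the threshold."""
    return bool(samples) and all(s > threshold for s in samples)

print(sustained_high_cpu([0.95, 0.97, 0.93]))  # True: suspicious
print(sustained_high_cpu([0.95, 0.20, 0.93]))  # False: load fluctuates
```

    Real detection would correlate such load spikes with process names, network connections to mining pools, and power draw, but even this crude check captures why cryptojacking shows up first on the IT bill.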


Jim: There’s the rundown you asked for. I may have expanded some areas that you weren’t prepared for.
Charles: Great. Fax that to everyone on the distribution list.
Jim: Yeah sure. You want to look at it first?
Charles: Do I need to?
Jim: No. No, I just wanted to make sure, it was in the same format. So that distribution list is gonna be my…?
Charles: What’s that?
Jim: The one I have. I’ll use the one I have.

Thank you for your patience as we fax this important rundown out to our distribution list.

James Long

Do the deceased have data rights?


When we think about the data privacy rights of people, there tends to be a natural assumption that those people are living. And that’s probably fair. After all, data privacy rights are still in their infancy in the grand scheme of things and there has been no real history of estates suing providers for privacy violations related to a deceased person. But it got me thinking, what data privacy rights, if any, apply to data pertaining to deceased people? Society has always afforded the dead some rights, which may not be immediately obvious. In the brave new world of data privacy, the answer is a little trickier, and it depends on which laws we’re talking about.

Most of the time, when a person dies, their account becomes inactive. (Although there have been some interesting exceptions in recent years.) Facebook has an estimated 10 million to 30 million deceased users, which is likely around 1% of its accounts. (Meanwhile, most of us would be glad to have that many visitors in total.) And that doesn’t include the various accounts made for George Washington and similar public figures who pre-deceased Facebook. An article from Time estimates that deceased Facebook users will eventually outnumber living ones, sometime in the next 50 years. And the issue is beginning to gain some scholarly attention in terms of what to do about it.

Let’s start with the low-hanging fruit. This is a rare instance where the GDPR is more illuminating than our domestic legislation. GDPR Recital (27) states:
"(27) This Regulation does not apply to the personal data of deceased persons. Member States may provide for rules regarding the processing of personal data of deceased persons."

Deceased persons are mentioned again in GDPR Recital (158):
"(158) Where personal data are processed for archiving purposes, this Regulation should also apply to that processing, bearing in mind that this Regulation should not apply to deceased persons. Public authorities or public or private bodies that hold records of public interest should be services which, pursuant to Union or Member State law, have a legal obligation to acquire, preserve, appraise, arrange, describe, communicate, promote, disseminate and provide access to records of enduring value for general public interest. Member States should also be authorised to provide for the further processing of personal data for archiving purposes, for example with a view to providing specific information related to the political behavior under former totalitarian state regimes, genocide, crimes against humanity, in particular the Holocaust, or war crimes."

So, that was simple enough: the GDPR does not protect deceased persons’ data. But what about here in the U.S.?

The HIPAA Privacy Rule protects the medical information of a deceased person for 50 years after the person’s death. But HIPAA applies mainly to medical information, and does not protect much of the financial information that cybercriminals go looking for. Other U.S. privacy statutes generally are not as explicit.

For instance, under the CCPA, a protected “consumer” is defined as "a natural person who is a California resident." The California Code of Regulations, in turn, defines a resident as "(1) every individual who is in the State for other than a temporary or transitory purpose, and (2) every individual who is domiciled in the State who is outside the State for a temporary or transitory purpose.” Where does that leave us? There is no known instance of a CCPA enforcement action arising out of a deceased person’s data, but is a deceased person a “resident” of California? If we are being really technical, does it matter whether the individual is buried or cremated? Arguably, if they are buried, they would continue to be an individual who is in the state, right? Are they still an individual if they are cremated? These questions are somewhat macabre, and they only raise further questions.

The privacy laws of other states are no more illuminating. In New York, the SHIELD Act protects “persons.” However, there is no widespread agreement on who (or what) qualifies as a person. See Matter of Nonhuman Rights Project, Inc. v. Lavery, 2018 NY Slip Op 03309, 31 N.Y.3d 1054 (2018).

Ultimately, if this issue ever comes up, it is likely to be a question for the courts. Especially in California, it seems inevitable that an estate will eventually sue for CCPA violations. Until then, we can only speculate.

Read More
James Long James Long

7 Steps to CAN-SPAM Act Compliance

We’ve all gotten those emails. You know, the ones riddled with typos, trying to get you click on something, maybe with suggestive themes or images? And most savvy business people have attained enough tech-competency to know not to click on that stuff. We call it…. SPAM!

But that’s not our marketing emails, right? Our marketing emails are polite, professional, and 100% above board. Right? Well….it depends.

Turns out, there is a federal law, the CAN-SPAM Act, that says your marketing emails must meet certain guidelines. Otherwise, it’s spam, and could be subject to a fine of up to $43,792 (how did they come up with that number??).

The Federal Trade Commission offers 7 tips for following the CAN-SPAM Act. For the most part, it is pretty straightforward, and I’ve reprinted them, verbatim, below. If you have any questions about whether your marketing emails meet these criteria, we’re happy to help.

These guidelines apply to emails whose primary purpose is to advertise or promote a commercial product or service, including content on a website operated for a commercial purpose.

Without further ado, straight from the FTC:

  1. Don’t use false or misleading header information. Your “From,” “To,” “Reply-To,” and routing information – including the originating domain name and email address – must be accurate and identify the person or business who initiated the message.

  2. Don’t use deceptive subject lines. The subject line must accurately reflect the content of the message.

  3. Identify the message as an ad. The law gives you a lot of leeway in how to do this, but you must disclose clearly and conspicuously that your message is an advertisement.

  4. Tell recipients where you’re located. Your message must include your valid physical postal address. This can be your current street address, a post office box you’ve registered with the U.S. Postal Service, or a private mailbox you’ve registered with a commercial mail receiving agency established under Postal Service regulations.

  5. Tell recipients how to opt out of receiving future email from you. Your message must include a clear and conspicuous explanation of how the recipient can opt out of getting email from you in the future. Craft the notice in a way that’s easy for an ordinary person to recognize, read, and understand. Creative use of type size, color, and location can improve clarity. Give a return email address or another easy Internet-based way to allow people to communicate their choice to you. You may create a menu to allow a recipient to opt out of certain types of messages, but you must include the option to stop all commercial messages from you. Make sure your spam filter doesn’t block these opt-out requests.

  6. Honor opt-out requests promptly. Any opt-out mechanism you offer must be able to process opt-out requests for at least 30 days after you send your message. You must honor a recipient’s opt-out request within 10 business days. You can’t charge a fee, require the recipient to give you any personally identifying information beyond an email address, or make the recipient take any step other than sending a reply email or visiting a single page on an Internet website as a condition for honoring an opt-out request. Once people have told you they don’t want to receive more messages from you, you can’t sell or transfer their email addresses, even in the form of a mailing list. The only exception is that you may transfer the addresses to a company you’ve hired to help you comply with the CAN-SPAM Act.

  7. Monitor what others are doing on your behalf. The law makes clear that even if you hire another company to handle your email marketing, you can’t contract away your legal responsibility to comply with the law. Both the company whose product is promoted in the message and the company that actually sends the message may be held legally responsible.
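To make the checklist concrete, here is a minimal sketch using Python’s standard library `email` module. All names, addresses, and URLs are hypothetical placeholders; this is an illustration of the structure, not legal advice that any particular wording satisfies the Act.

```python
from email.message import EmailMessage

def build_marketing_email(recipient: str) -> EmailMessage:
    """Assemble a message touching the main CAN-SPAM checklist items.
    Every name and address here is a hypothetical placeholder."""
    msg = EmailMessage()
    # Items 1-2: accurate header info and a truthful subject line
    msg["From"] = "Acme Widgets <marketing@acme.example>"
    msg["To"] = recipient
    msg["Subject"] = "Advertisement: 20% off Acme widgets this week"
    # Item 3: identify the message as an ad; item 4: physical address;
    # item 5: a clear, easy opt-out mechanism
    msg.set_content(
        "This is a promotional message from Acme Widgets.\n\n"
        "Save 20% on all widgets through Friday.\n\n"
        "Acme Widgets, 123 Main St., Syracuse, NY 13202\n"
        "To stop receiving these emails, reply with the subject "
        "'UNSUBSCRIBE' or visit https://acme.example/unsubscribe"
    )
    return msg

email_out = build_marketing_email("customer@example.com")
```

Items 6 and 7 (honoring opt-outs within 10 business days and monitoring your vendors) are process obligations that no email template can satisfy on its own.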

If you’d like to book a free 30 minute consultation to discuss CAN-SPAM Act compliance, or any other tech law issue, you can book a free virtual consultation with an attorney here. It’s quick and easy!

Read More
James Long James Long

Is Federal Data Privacy Legislation On The Way?

All 50 states in the U.S. now have breach notification laws. Many are similar, but some are unique. Places like California, Illinois, New York and Massachusetts have been relatively aggressive in developing regulations to protect their residents from data privacy shenanigans as well as the effects of cybersecurity incidents. Others—I’m looking at you, South Dakota—have not.

But we are reaching a critical point in the development of data privacy and cybersecurity law: complying with the laws of every state is getting more and more challenging. Most businesses simply throw up their hands, knowing they should do “something,” but not really sure what that “something” should be.

More and more, we’re hearing calls for a single, unifying privacy law. One statute to rule them all. Of course, we must be careful what we wish for. If a federal law preempts legislation like the CCPA, or topples the Illinois Biometric Act’s private right of action, many businesses may welcome the change. Yet the law could turn the other way instead, opening up data breach litigation in federal courts across the country.

In 2019, Sen. Ed Markey, D-Mass., introduced the Privacy Bill of Rights Act, which was followed by the United States Consumer Data Privacy Act. The bills opened debate on the issue, but ultimately did not pass.

Realizing that the legislation was bound to fail without a catchy acronym, Sen. Roger Wicker, R-Miss., proposed the Setting an American Framework to Ensure Data Access, Transparency, and Accountability Act (“SAFE DATA Act”) in September 2020. The SAFE DATA Act in its current form proposes complete state preemption (Sec. 405(a)), thus gutting the CCPA, NY SHIELD Act and Illinois Biometric Act. Further, many of its requirements would not apply to small and mid-sized businesses with fewer than 500 employees and less than $50 million in annual revenue that do not collect or process the personal data of 1 million or more individuals. (Sec. 2(12)). Last, there is no mention of a private right of action, though State Attorneys General would be empowered to bring suit under the Act. (Sec. 402(a)).

My sense is that there is not yet enough consensus on certain thorny issues, like the private right of action, state preemption and the scope of applicability, for this bill to pass, but it’s a starting point. Further, it seems that federal action, one way or another, is picking up steam, with the likely result being some action, even if half-hearted.

We’ll stay tuned and see what comes of it.

Read More
James Long James Long

New York State Assembly Proposes “Biometric Privacy Act”

On January 6, 2021, a bipartisan group of 24 legislators proposed Assembly Bill 27, known as the “New York Biometric Privacy Act.” The Bill is essentially the same as the Illinois Biometric Information Privacy Act, which is considered the vanguard of legislation protecting citizens’ biometric data. While well-meaning, such a law would create significant challenges for entities doing business in New York.

In case you need a reminder, biometric data is any aspect of your person that can be used to identify you. For example, fingerprints, retina scans, DNA, or even your face. The legislation has already generated a cottage industry of privacy-related class action lawsuits in Illinois and could mean billions for the plaintiffs’ bar if enacted in New York.

Notably, New York introduced some privacy protections for biometric data when it passed the New York SHIELD Act less than two years ago. However, the SHIELD Act specifically declined to give affected citizens a private right of action. Instead, it expanded the definition of a data breach to include unauthorized access (as opposed to unauthorized acquisition) of protected data, defined specific actions to be taken when a data breach occurs, and created affirmative requirements to reasonably safeguard private information (including biometric data).

New York is unique in the way its “reasonable safeguards” requirement is applied. In California, for example, such affirmative requirements only apply to entities with more than $25 million in gross revenue, or who are in the business of buying and selling consumer data. New York created an affirmative duty for all entities, regardless of size and location, who store New York residents’ private information. That said, the SHIELD Act enumerates specific requirements for mid-to-large sized businesses, while still demanding “reasonable” safeguards even from small businesses.

Assembly Bill 27 now proposes that entities in possession of certain biometric data be required to develop, and comply with, a written policy establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information once the initial purpose for collecting them has been satisfied, or within three years of the individual's last interaction with the private entity, whichever occurs first. Such entities would also be required to notify individuals, before collection, of the specific purpose and length of time the data will be collected, stored, and used. Last, the entities would have to obtain a written release prior to collection, which is likely to take the form of fine print in a click-thru agreement that no consumer will actually read.

These requirements may be onerous for small businesses, but in many cases they are not unreasonable. More troubling, the New York proposal would include a private right of action for violations of the law. As any attorney who has spent time litigating mass torts and class actions will tell you (wink wink), once the floodgates open for plaintiffs’ attorneys, a litigation industry is inevitable, and good companies will get swept up in the lawsuits along with the bad actors.

Illinois has shown us how even harmless errors can have big consequences. For example, in Rosenbach v. Six Flags Entertainment Corp., 2019 IL 123186 (Ill. Jan. 25, 2019), the Illinois Supreme Court found that a plaintiff need not show actual damages to proceed with a lawsuit under its biometric privacy act. A similar ruling in New York could spell disaster for a lot of businesses already struggling to comply with the SHIELD Act. In Illinois, it took several years for class actions to rise as a result of the law, but by 2019, 161 new class actions were counted in the six-month period from January to June.

So far, the bill has been referred to the Consumer Affairs and Protection Committee. Given its bipartisan sponsorship, some version of this bill has a good chance at passage. We will continue to track developments and keep you apprised. In the meantime, The Long Law Firm will work towards developing effective and affordable solutions for small to mid-size businesses to comply with all new and existing privacy and cybersecurity laws. Stay tuned! Feel free to shoot me an email with any questions, concerns, or news that you have.

Read More
James Long James Long

Data Mapping: It’s A Spreadsheet.


Re-posted from intothecyberbreach.com, originally published on June 26, 2020.

There are few things that niche industries love more than developing their own lingo. Those of us old enough to remember the tech boom of the late-90s/early 2000s probably also remember hearing that everything was a “paradigm-shift.” Eventually, this phrase morphed into everything being a “gamechanger.” Today, business people love to talk about whether X “moves the needle” and it wasn’t so long ago that before elaborating on any topic, we would first announce that we are going to “add some color” to the issue. My suspicion has always been that this is a form of code-switching, designed to let the listener know that the speaker is with the “in” crowd. It comes from a place of insecurity, worst-case, or conformity, best-case. And look, I’m guilty of it too, which is why there is so much to unpack here (see what I just did?).

The cybersecurity world is no different, and in a lot of ways worse, when it comes to having its own language. You would think that the best way to communicate already very complex ideas would be to simplify the language so that everyone could understand it. But it’s pretty clear that a lot of people don’t want the language to be simple. They want it to be confusing, so they appear knowledgeable.

One of my goals with this blog is to break down intimidating cybersecurity concepts into plain language. Today’s lesson? The Data Map. Guess what. It’s a spreadsheet. Let’s take a look at what a data map looks like, which, hopefully, will make it clearer why they are important.

The data map really is the roadmap to managing your cyber risk and the Rosetta Stone for responding to incidents involving your data. It is the product of all the preparation and planning work you put in ahead of time, so that when an incident does occur (sorry, but chances are it eventually will), you will have a game plan (a map, if you will) for how to proceed. That said, it is not, itself, the incident response plan (that is a different thing, which I will cover in a future post).

The data maps that I typically use include the following fields, give or take:

  • Data Description

  • Category

  • Source

  • Metadata

  • Purpose

  • Lawful Reason

  • Handling

  • Disposal

  • Justification Inquiry

  • Who Has Access

  • Who is Responsible

  • What Laws Implicated

  • Risks

  • Compliance Notes
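Since the data map really is a spreadsheet, here is a minimal sketch in Python using the standard library’s `csv` module, with a trimmed set of the fields above and entirely hypothetical entries:

```python
import csv
import io

# A subset of the data map fields, trimmed for the example
FIELDS = ["Data Description", "Category", "Source", "Purpose",
          "Handling", "Disposal", "Who Has Access",
          "What Laws Implicated", "Risks"]

# Hypothetical rows -- the real value is in filling these out for
# your own business, not in the structure itself
rows = [
    {"Data Description": "Customer contact list", "Category": "Contact info",
     "Source": "Web signup form", "Purpose": "Marketing",
     "Handling": "CRM, encrypted at rest", "Disposal": "Purge after 3 years",
     "Who Has Access": "Sales team", "What Laws Implicated": "NY SHIELD, CAN-SPAM",
     "Risks": "Email server compromise"},
    {"Data Description": "Employee SSNs", "Category": "Government ID",
     "Source": "HR onboarding", "Purpose": "Payroll and tax",
     "Handling": "HR system only", "Disposal": "7 years after separation",
     "Who Has Access": "HR manager", "What Laws Implicated": "NY SHIELD",
     "Risks": "Identity theft if breached"},
]

# Write the map out as CSV, i.e., a plain spreadsheet
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
data_map_csv = buf.getvalue()
```

That is the whole trick: one row per category of data, one column per question you should be asking about it.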

There you have it. The secret sauce. I’m not worried though, because you are still going to want a professional who can help guide you through this process.

Unfortunately, it’s not the fields that make your map useful, it’s the data that you put in them. One of the things a good data map will do is expand a team’s thinking about what data actually is. One way this is accomplished is by categorizing your data. Is this data a record of contact information? Login information? Social security numbers? Client info or employee info?

The more you start to think about how to describe and categorize your data, the more areas of your business will reveal themselves as important sources of data. For instance, you may start to realize that much of this data is located in your email server. Some of it is on employee devices, company laptops, USB drives on your desk, CDs in the file cabinet, and so on.

Ultimately, a lot of questions in your data map are about the process you undergo in answering them. Fact is, the actual answers are always changing. But it is the exercise of thinking about risk, and thinking about where data resides, and who has access to it, that is pivotal to what a data map does for your team. It also creates jumping off points for further inquiry.

So, how does this help us when a breach has occurred?

Glad you asked. Here is a thought experiment. Say you have a situation where you are contacted and told that three of your customers have reported recent suspicious activity on their credit cards. Visa thinks you are the source of a data breach. Step one is to investigate and stop the bleeding, right? Where do you begin?

If you’ve done a good job of working through these issues ahead of time, you can review your data map and clearly see 1) areas where this data is stored; 2) key personnel responsible for this data whom you will want to call on to address the situation; 3) further information about the scope of the data you are storing; and 4) areas of concern where a breach in one place could signify other, as yet undiscovered, breaches on the system (particularly where you have multiple machines or are using SaaS vendors). The possibilities are endless in terms of creating shortcuts for incident response.
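That triage step can be sketched as a simple filter over the map. The rows here are hypothetical; the point is that "where does card data live, and who owns it?" becomes a one-line lookup:

```python
# A trimmed, hypothetical data map: each row notes where a category of
# data lives and who is responsible for it.
data_map = [
    {"description": "Customer credit cards", "category": "payment",
     "location": "Payment processor (SaaS)", "owner": "Finance lead"},
    {"description": "Customer emails", "category": "contact",
     "location": "Email marketing platform", "owner": "Marketing lead"},
    {"description": "Card transaction logs", "category": "payment",
     "location": "On-prem web server", "owner": "IT manager"},
]

def triage(category: str):
    """Return (location, owner) pairs for every row holding this category,
    i.e., every place to investigate and every person to call."""
    return [(r["location"], r["owner"]) for r in data_map
            if r["category"] == category]

# Visa says card data leaked: instantly list where it lives and who owns it
hits = triage("payment")
```

With a real map, that same filter tells you within minutes which systems to isolate and which people to get on the phone.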

In addition, you can use your data map to take proactive steps to secure this data: identify weaknesses, areas that can be improved, people to bring into various efforts, and so on.

By completing this work ahead of time, you are getting a head start at a time when a few minutes can literally cost you millions of dollars. Is that important enough for you?

So, now that you know what a data map is, and why they are important, what are you going to do to ensure you can leverage their usefulness?

Read More
James Long James Long

Syracuse Startup Podcast


Re-posted from intothecyberbreach.com, originally published on May 26, 2020.

Hi everybody. I launched a podcast called “Syracuse Startup,” which interviews entrepreneurs in Syracuse to share insights, challenges, victories, and more from our chosen fields. My first guest is a friend of mine, Eric Maley, of Upstate Agents, talking about how he got into real estate. Look out for new episodes approximately once a month. Please consider subscribing and leaving positive reviews.

If you are an entrepreneur in Syracuse and would like to be on the show, give me a shout. Techies are somewhat preferred, but it’s not a requirement.

Here is a link:

Read More
James Long James Long

Is Personal Information About My Adversary Protected Too?

Re-posted from intothecyberbreach.com, originally published on May 25, 2020.

One of the major challenges for lawyers in the digital age is keeping case data out of the wrong hands. It is a staple of law practice and has been since the dawn of the profession. Even before anyone had heard of cybersecurity, Model Rule 1.6 of the Rules of Professional Conduct required that “A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”

I wrote about this problem recently when discussing a data breach that affected a law firm’s famous clients and their sensitive information. The problem of how to avoid a data breach when YOUR data isn’t really YOURS is an important one. Failing to get it right can mean malpractice, loss of clients, and possible disciplinary action.

Yet, there is another issue that gets far less attention:

What about my adversary’s private information?

For example, it’s quite common to request social security numbers from an adversary in personal injury matters. There are good reasons for this, mostly having to do with verifying an individual’s identity, authenticating medical records, and complying with various federal reporting requirements when settling personal injury matters. However, through the lens of the NY SHIELD Act, this gets to be a tricky issue.

Under the SHIELD Act, a person’s name and social security number, when kept together, is considered to be “private information.” This definition is important because the Act goes on to require that:

“Any person or business that owns or licenses computerized data which includes private information of a resident of New York shall develop, implement and maintain reasonable safeguards to protect the security, confidentiality and integrity of the private information including, but not limited to, disposal of data.”

So, taking a step back, this means that any law firm (law firms are businesses) that owns computerized data (on your phone, laptop, cloud, etc.) which includes private information (e.g., a Word document with someone’s name and social security number on it) “shall develop, implement and maintain reasonable safeguards to protect the security, confidentiality and integrity of the private information…”

So, what are “reasonable” safeguards??? Ah, the million dollar question (perhaps literally).

Here, the SHIELD Act defines what it considers reasonable, but creates a caveat that “small businesses” need only to maintain “reasonable administrative, technical and physical safeguards that are appropriate for the size and complexity of the small business, the nature and scope of the small business’s activities, and the sensitivity of the personal information the small business collects from or about consumers.”

For all others (those with at least: 50 employees, $3 million in annual revenue AND $5 million in assets), reasonable administrative safeguards are met by:

  • Designating one or more employees to coordinate the security program

  • Identifying reasonably foreseeable internal and external risks

  • Assessing the sufficiency of safeguards in place to control the identified risk

  • Training and managing employees in the security program practices and procedures

  • Verifying that the selection of service providers can maintain appropriate safeguards and requiring those safeguards by contract

  • Adjusting the security program in light of business changes or new circumstances

Reasonable technical safeguards are met by:

  • Assessing risks in network and software design

  • Assessing risks in information processing, transmission, and storage

  • Detecting, preventing, and responding to attacks or system failures

  • Regularly testing and monitoring the effectiveness of key controls, systems, and procedures

And reasonable physical safeguards are met by:

  • Assessing risks of information storage and disposal

  • Detecting, preventing, and responding to intrusions

  • Protecting against unauthorized access to or use of private information during or after the collection, transportation, and destruction or disposal of the information

  • Disposing of private information within a reasonable amount of time after it is no longer needed for business purposes by erasing electronic media so that the information cannot be read or reconstructed

In other words, mid-sized businesses and up must implement a bona fide data security program.

Small businesses are left to determine for themselves what is “reasonable” (until we see some caselaw on the subject). However, we can be certain that doing nothing is not going to cut it. Further, reasonableness for small businesses is going to be some subset of the enumerated safeguards mentioned above and will adjust depending on the context.

Herein lies one of the problems. There are few professions with more sensitive data than a law firm. So, what can we do about it?

There are a number of simple steps that go a long way toward being reasonable here, especially when it comes to social security numbers. Here are a few ideas, as a starting point:

  1. Don’t collect the sensitive information until you actually need it.

  2. Dispose of the sensitive information quickly when you no longer need it.

  3. Do not store the sensitive information anywhere that is not encrypted and protected with multi-factor authentication. (i.e., don’t keep it on your hard drive if your hard drive isn’t appropriately protected)

  4. Do not email the sensitive information in an attachment.

  5. Practice “least privilege” by not allowing all users to access sensitive information.
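Steps 1 and 2 both depend on knowing where sensitive information is actually sitting. As a minimal sketch, standard-library Python can flag SSN-shaped strings in a document so they can be reviewed and disposed of. Note the pattern is deliberately naive; a real scanner would validate the digits and check context to cut down false positives, and the sample text is hypothetical:

```python
import re

# Naive SSN pattern (###-##-####). A production scanner would also
# validate area/group numbers and consider surrounding context.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_ssns(text: str) -> list[str]:
    """Flag SSN-shaped strings so they can be reviewed and disposed of."""
    return SSN_RE.findall(text)

# Hypothetical case-file text: one SSN, plus a phone number that
# should NOT be flagged
sample = "Claimant: Jane Doe, SSN 123-45-6789, DOB 01/02/1980. Ref 555-1234."
flagged = find_ssns(sample)
```

Running something like this over old matter folders is one cheap, documentable step toward the “reasonable safeguards” the SHIELD Act demands.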

Look, will these things, alone, constitute “reasonable safeguards” for a small business under NY SHIELD? I don’t know. No one does, yet. But I can promise you that they are an excellent start while you are working on implementing your data security program.

Read More
James Long James Long

Vendor Management.

Re-posted from intothecyberbreach.com, originally published on May 20, 2020.

I just completed a whirlwind, virtual tour of New York bar associations (and boy, are my arms tired…) to teach CLEs on New York’s new SHIELD Act. (Thanks to the New York State Bar Association, the Broome County Bar Association, the Tompkins County Bar Association and the Onondaga County Bar Association!) One of the issues that comes up in these presentations is the topic of vendor management.

It’s not an easy issue, and here’s why. If you are a billion-dollar company, chances are you have enough leverage to have an arms-length negotiation with many of your vendors. You can explicitly require that they take certain steps to protect you both; failure to comply is a breach of contract, and is actionable. But the SHIELD Act is unique among state cybersecurity laws in that it requires businesses of all sizes to take proactive steps toward assuring “reasonable safeguards” of personal information. For contrast, in California, these proactive security requirements only apply to companies taking in at least $25 million in annual revenue (or who are in the business of trading big data).

Meaning, if you are doing business in NY with five- or six-figure annual revenue, you are in the uncomfortable position of having to vet your vendors to ensure the data you send them is being secured, while having no leverage to force most vendors to do so. One example of a vendor where this issue arises is a credit card processor, who most certainly has your customers’ personal information. For an illustration of this problem, try calling up Google and telling them that you would like to negotiate the terms of your user agreement for Gmail. Good luck.

There are a couple of approaches to this issue. First, remember that the SHIELD Act asks you to take “reasonable” safeguards. Small businesses are not required to take heroic measures to safeguard their information; we should not be bankrupting ourselves to accomplish data security. Second, we can dive into the vendor management process a little and identify areas where a small business can maneuver. Enter the vendor questionnaire. Obviously, to fully execute on a program like this, you might want to retain an attorney who knows about this stuff.

The Vendor Questionnaire

The Vendor Questionnaire is rapidly becoming one of the primary means by which we can perform due diligence on our vendors. There are pluses and minuses. The more involved your vetting process, the more costly it becomes to retain vendors. Further, you are relying on their word. Be sure to follow up when you can. Ask for proof. Ask for referrals.

Not everyone is going to be willing to complete such a questionnaire, but the good news is that questionnaires have gained in popularity to an extent that they are becoming standardized. If questionnaires are standardized, there is a good chance that even larger potential vendors may be willing to share them with small firms.

For a review of standardized questions (which have the added benefit of being more cost-effective for small businesses), take a look at the CIS Top 20. This is a great starting point for the types of questions you should be asking of potential vendors. You can also look at NIST, SIG, and VSA. Some of these organizations even offer free questionnaires that you can use yourself. Of course, if you aren’t in a position to evaluate the answers, they may be difficult to use, but with more vendors addressing the same questionnaires, it is becoming easier for small businesses to get answers. You may still need someone to review the responses.

Here are some basic questions that you can ask, which will at least give you a starting point to evaluate. Remember to document the answers, which will be an important part of your compliance documentation generally:

Information security and privacy questions

  • Does your organization process personally identifiable information (PII) or protected health information (PHI)?

  • Does your organization have a security program?

  • What standards and guidelines does it follow?

  • Does your information security and privacy program cover all operations, services and systems that process sensitive data?

  • Who is responsible for managing your information security and privacy program?

  • What controls do you employ as part of your information security and privacy program?

  • Provide a link to your public information security and/or privacy policy.

Physical and data center security questions

  • Are you in a shared office?

  • Do you review physical and environmental risks?

  • Do you have procedures in place for business continuity in the event your office is inaccessible?

  • Do you have a written policy for physical security requirements for your office?

  • Is your network equipment physically secured?

  • What data center providers do you use if any?

  • How many data centers store sensitive data?

  • What countries are your data centers located in?

Web application security questions

  • Do you have a bug bounty program or other way to report vulnerabilities?

  • Does your application have a valid SSL certificate to prevent man-in-the-middle attacks?

  • Does your application require login credentials?

  • How do users get their initial password?

  • Do you have minimum password security standards?

  • How do you store passwords?

  • Do you offer single sign-on (SSO)?

  • How can users recover their credentials?

  • Does your application employ a defense in depth strategy? If so, what?

  • How do you regularly scan for known vulnerabilities?

  • How do you do quality assurance?

  • Do you employ pentesting?

  • Who can we contact for more information related to your web application security?

Infrastructure security questions

  • Do you have a written network security policy?

  • Do you use a VPN?

  • Do you employ server hardening?

  • How do you keep your server operating systems patched?

  • Do you log security events?

  • What operating systems are used on your servers?

  • Do you backup your data?

  • How do you store backups?

  • Do you test backups?

  • Who manages your email infrastructure?

  • How do they prevent email spoofing?

  • How do you protect employee devices from ransomware and other types of malware?

  • What operating systems do employee devices use?

  • Are employee devices encrypted?

  • Do you employ a third-party to test your infrastructure security?

  • Who can we contact in relation to infrastructure security?
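As an aside on the email spoofing question above: SPF and DMARC policies are published as public DNS TXT records, so you can read a prospective vendor's anti-spoofing posture yourself. Here is a toy parser for a DMARC record, just to show what the tags look like (the record text is a made-up example, not any real domain's policy):

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record (e.g. "v=DMARC1; p=reject; ...")
    into its tag=value pairs. Default-deny on malformed parts."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags
```

The tag you care most about is `p`, the policy: `none` means the domain is only monitoring, while `quarantine` or `reject` means spoofed mail actually gets filtered.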

Another vendor management guidepost is a security rating or certification. Especially if you are using a very large vendor, security ratings are easy to monitor and provide a basic reassurance that you are acting “reasonably”. There are a number of popular ratings companies, and there seems to be a battle for dominance in that field lately. Here is one. SOC 2 certification is also a good starting point when evaluating the security of a potential vendor.

In a later post, I will discuss more about security ratings, SOC2 and other certifications, as well as third-party monitoring and audits. Stay safe out there…

James Long

Who’s The Boss?

Re-posted from intothecyberbreach.com, originally published on May 12, 2020.

I like to talk a lot about how the NY SHIELD Act puts proactive requirements on every business that handles New Yorkers’ personal information. That means the information businesses (like law firms) store about their clients is subject to the SHIELD Act.

No, literally. I just gave two presentations on this issue in the last couple weeks and am booked to present to the New York Bar Association tomorrow. (You can register here, wink). It’s hard to catch me not mentioning this multiple times per day.

But, it’s easy to forget that the reason for these requirements is to minimize your risk of data breach.

Who wants to be the guy who has to call “THE BOSS” (aka Bruce) and tell him that his personal information has been hacked?

Last week, Variety reported that entertainment law firm Grubman Shire Meiselas & Sacks was subjected to a major data breach of 756 gigabytes of documents regarding several well known music and entertainment figures, including: Lady Gaga, Madonna, Nicki Minaj, Bruce Springsteen, Mary J. Blige, Ella Mai, Christina Aguilera, Mariah Carey, Cam Newton, Bette Midler, Jessica Simpson, Priyanka Chopra, Idina Menzel and Run DMC.

The thing that I found interesting about this story was the firm’s statement to Variety: “We have hired the world’s experts who specialize in this area…” Notice they didn’t say, “we have implemented our incident response plan and our cybersecurity response team began working on it as soon as they became aware of the issue.” Their response suggests they were caught flat-footed, i.e., that their response was to hire someone. I would hate to have to explain to the Boss why they don’t already have a plan in place for this sort of thing. Perhaps if they had a plan in place, it might not have happened in the first place?

I wonder what kind of client list they will have next year.

James Long

How I Learned To Stop Worrying And Love Working From Home.

Re-posted from intothecyberbreach.com, originally published on April 24, 2020.

I should preface this post by noting that it has almost nothing to do with cybersecurity…

What a strange time. 2020 has been, just… a really weird year so far. First of all, it’s hard to believe that we are about one-third of the way through the year. Next week it will be May. Most of us are living in this Groundhog Day-like situation where every day is the same: same house or apartment, same haircut, same pants, same leftovers. The weather in Syracuse suggests it is closer to February 2nd than April 24th.

The little things that we took for granted are often not available to us any more. Like having places to go. And I’m sorely missing my jiu-jitsu gym. Frankly, that is really just putting a light-hearted spin on things, because there are a number of us out there losing loved ones, fighting for their lives, and fighting for their sanity while they work tirelessly to help others (thanks btw!).

But this is a blog, and while it is a professional blog (i.e., about cybersecurity law) it feels a little disingenuous to ignore the personal during this crazy time. Fact is, 2020 has been especially weird for me because I’ve actually been working from home since February, more than a month before COVID-19 really hit New York hard. Last year, I made the decision that I wanted to own the means of my production, and I began working in earnest to make that happen. I explored a number of options, and promised myself that I would make something happen by the end of the year. In January, an opportunity came up that would allow me to build something from scratch, and I can honestly say it has been one of the most important decisions of my life. Everything is about to change. It is a little early to say, but it may also turn out to be one of my best decisions as well.

It’s been a bit like a pendulum being here. My first month as a law firm owner was about getting my footing. I left a firm that, while it had a lot of problems, also had a lot of people that I cared about. I had hoped my benevolent overlords would understand where I was coming from when I told them I was considering going out on my own, but they didn’t. When I finally saw that, the decision got much easier.

At home, I set up a nice office in what used to be a guest bedroom. It was a rough start during Winter break with my kids. Little did I know the kids would be a fixture in this office for months. In the first week of March, I signed a lease to open the new firm’s office in downtown Syracuse, and while I’m thrilled about my new digs, the lease doesn’t start until June, leaving me, at home, whether the state tells me to or not.

Anyway, I began this process as one of the minority of Americans who work from home and ended up one of millions. I had worked from home previously, before my legal career, when I freelanced as a web programmer. Between those times, and all that has gone on now, I’ve come away with a number of tips, tricks, and hints for staying sane. Hopefully you find some use here.

  1. Have a dress code. It doesn’t have to be anything fancy. If you get up and get dressed for the day, you are going to find that you feel more productive and more “yourself.” The temptation to sit on the couch in your jammies all day is strong, but it is a false idol.

  2. Have a routine. Apparently this is one of the first things they tell you in prison. I haven’t been to prison, but this lockdown has all of us learning a little about how to cope in confinement. I get up at the same time as I would if I was in the office. I work until “quittin time”, every weekday. Sure, I take a lunch break, sometimes take a walk, but I keep regular office hours. Even when no one is looking.

  3. That goes for eating too. Eat three square meals, or whatever is “normal” for you. A lot of us started this thing with crazy stress eating. Understandable. But now that we are on month two of this thing, plan your meals. I’m not saying you need to diet. In fact, right now would be an extremely difficult time to start a diet. I’m just suggesting that the second or third breakfast, and the dessert with lunch, happen because you’re bored and stressed, not because you are hungry. Again, the reason to get into the routine is that you will feel better in short order.

  4. Get some exercise. Even if you aren’t the fit type, do something with your body. You will feel better. You don’t have to do anything crazy. Take walks. Lift something over your head. As I’m writing this, I’m in the middle of a friendly online contest to see who can do the most pushups in an 8 hour period. Even if you lose, you win. There are a ton of good videos on YouTube that can get you going as well. I’m a big fan of Yoga with Adriene, especially if I’m feeling stressed.

  5. Find the silver lining. I designed a marketing plan around the March 21st SHIELD Act deadline in New York. But by the time March 21st came, sure, I was fresh off some early victories in the launch of the firm, but with a brand new business, a family looking to me for financial support, the courts shut down, and business grinding to a halt, this new adventure was feeling pretty scary. I decided I could choose to look at this as a source of distress or as an opportunity. You can’t control this. All you can control is how you react. For me, the scary thing is that when you’re starting a new business, there is a lot to do, not a lot of resources to do it with, and your competition can grind away and outperform you while you are just getting started. The mortgage comes due whether you make money or not. But the opportunity here is that everyone else is kind of in the same position as me. In fact, I had an extra month to set up my home office and get into a routine. My home printer is just as good as everyone else’s. I have work to do (if this had happened a month earlier, I’d probably be screwed). Suddenly, the federal government is talking about supporting small business, and help is available in a way that it wasn’t when I first launched. If anything, the system has changed to my advantage. Maybe those things don’t apply to you, but the point is to find the advantages here. Find the opportunities presenting themselves. If you are having a hard time finding the silver lining in your situation, check out this video. I go to it a lot when I’m feeling discouraged.

  6. Not every day is going to be a win. I’d be lying if I told you that I’m productive all day every day. I’m not. But the striving gets me a long way there. Could I do better? Probably. But, by staying focused on the goal, I’m getting where I need to be. Sometimes you gotta just take the “L” for the day and move on. New day, new grind. Go do it.

  7. Be compassionate. There is a lot of “together” time in a quarantined family of four. I was an only child. I like people a lot. I also like alone time. A lot. If I don’t get alone time, I get cranky. But guess what? My wife has requirements also. My kids do too. We all have little quirks. Don’t let things fester with your team. Have patience. Communicate. Be nice. Say “sorry” when you’ve been a jerk. I guarantee that in the last two months, at least once, we’ve all been jerks to someone.

  8. You have no excuses. This pandemic may be remembered as the golden age of memes. Here is one of my favorites:

The takeaway is that right now, you have no excuses not to make your dreams come true, or at least work towards them. Use this time to come out of this with a new skill, new hobby, or new experience. Think about all of things you told yourself you would do if you had time. Now go do them.

  9. Productivity is good. I’m seeing a lot of articles out there talking about how expecting to be productive during this time is not reasonable. I think that is silly, and not especially helpful. First, I would note that those articles were all written by people who managed to get up out of bed and go write an article. More importantly, getting things done feels good. Sitting around and “waiting” is more likely to drive you nuts. Sure, this is a stressful time. Perhaps it’s not realistic to expect your productivity levels to remain the same as they were before all this. Like I said, don’t beat yourself up over it. But, finding meaning in what you do is the hallmark of a healthy life. Keep striving towards progress.

  10. Your mileage may vary. Don’t compare yourself to others. The things I struggle with may come easy to you and vice versa. Accept it, roll with it, and do your best.

James Long

The Big Day.

Re-posted from intothecyberbreach.com, originally published on March 24, 2020.

This past Saturday, March 21st, was the day the New York SHIELD Act required all businesses with New Yorkers’ personal information to comply with new “reasonable safeguard” requirements, proportionate to the size and scope of the business.

My firm has been focused on this day for a while now. But the world feels, somehow, vastly different than it was just a month ago. Focus changes, priorities change.

In some ways, cybersecurity risks loom larger than ever. There are reports of cyberattacks on hospitals and U.S. agencies. There are warnings of a coming surge in fraud schemes and other malicious scams. On the other hand, all non-essential businesses are closed, including most of the legal profession and court system.

Here is what we know hasn’t changed. Bad actors have been attempting to take advantage of your personal data for a long time. That remains constant. With so many businesses working from home, or working on a system in which they are not yet fully comfortable, the opportunities for those bad actors to take advantage are clearer than ever.

Budgets change. Focus changes. Priorities change. But if you’ve got a business, you need to take steps NOW. Just like you don’t cancel your insurance policy when a storm is coming. I think we can all safely say, the cybersecurity storm is on its way.

My own view is that while compliance for the sake of avoiding state enforcement is probably not your top priority today, those “reasonable safeguards” required under the law are a MUST to avoid further business disruptions during and after the pandemic. Those disruptions could prove fatal to many businesses. So if you aren’t going to do it for THEM, do it for YOU.

Be safe out there.

James Long

Is Cybersecurity Insurance A Sword Or A Shield?

Re-posted from intothecyberbreach.com, originally published on March 5, 2020.

Just some quick thoughts on cyber insurance. As insurers get more sophisticated in how they cover cyber incidents, businesses need to get more savvy as well. This isn’t a zero-sum game. As a business owner, you NEED insurance. And as an insurer, the carrier wants to calculate the risk as accurately as possible. In the old days, cyber incidents might fall into traditional areas of coverage (e.g., business interruption). But, now we’ve got proactive security requirements coming out of the states. CCPA only applies to mid-size or larger businesses. However, here in New York, even if you are a small business you need to have SOME program in place (e.g., “reasonable safeguards” taking into account the size and scope of your business). Personally, I don’t think cybersecurity compliance has to be rocket science. But any way you slice it, doing nothing is not a smart option.

I think what you will find is that, going forward, doing nothing might also get your coverage pulled. At what point is non-compliance with SHIELD or CCPA going to be considered reckless, and therefore not insurable? I have a feeling we are going to start finding out the answer soon.

James Long

All Your Hospital Are Belong To Us.

Re-posted from intothecyberbreach.com, originally published on February 15, 2020.

This morning, I ran across a 2014 article on Wired.com explaining that hospital medical devices and other related gadgets (what we would today call IoT, the “Internet of Things”) are shockingly easy to access via the wireless network and vulnerable to abuse by would-be hackers. For some reason, the article reminded me of an old meme from the early 2000s, hence the name of this post. I ended up down a bit of a Wired.com rabbit hole, which I figured I’d share with you.

Back in 2014, they reported on a study that found “drug infusion pumps–for delivering morphine drips, chemotherapy and antibiotics–that can be remotely manipulated to change the dosage doled out to patients; Bluetooth-enabled defibrillators that can be manipulated to deliver random shocks to a patient’s heart or prevent a medically needed shock from occurring; X-rays that can be accessed by outsiders lurking on a hospital’s network; temperature settings on refrigerators storing blood and drugs that can be reset, causing spoilage; and digital medical records that can be altered to cause physicians to misdiagnose, prescribe the wrong drugs or administer unwarranted care….” as well as discovering “they could blue-screen devices and restart or reboot them to wipe out the configuration settings, allowing an attacker to take critical equipment down during emergencies or crash all of the testing equipment in a lab and reset the configuration to factory settings.”

I assumed that given the article was almost six years old, the security situation in hospitals would be markedly improved. My initial research has not borne that out exactly. By 2017, Wired was reporting that “Medical Devices are the Next Security Nightmare.” A little weird, if you ask me, since they identified the issue three years earlier, but I digress. Wired reported that while the FDA has begun providing guidance on cybersecurity concerns, a significant percentage of medical devices were running on outdated operating systems or technology that is no longer supported with security patches, yet had already gotten through FDA approval and into common usage. Instances of Windows XP (which was released in 2001, almost 20 years ago) were found running major hospital computers and connected to various devices (they cited an average of 10 to 15 connected devices per bed, with a large hospital having up to 5,000 beds). The FDA certainly has stepped up its cybersecurity game since 2017, and it offers great cybersecurity resources for the medical community here.

Fast forward to 2019, Wired reported on a newly discovered vulnerability on devices that have been in use in hospitals for nearly 20 years. The problem, as put by one cybersecurity analyst, is that “once you identify what is vulnerable, how do you actually update these devices? Often the update mechanism is almost nonexistent or it’s such an analog process it’s almost like it’s with a screwdriver. It’s not something that can be done at scale. So I don’t know if it will ever be accomplished to update all of these machines.”

But it’s never enough to just identify the problem and throw our hands in the air. HIPAA has long required notification for security breaches of personally identifiable health information. But newer data privacy laws like NY SHIELD, CCPA and GDPR take data security a step further by expanding the definition of protected private information. For instance, NY SHIELD considers a username and password combination to be protected private information that businesses are required to safeguard. For all of their efforts complying with HIPAA, healthcare organizations remain at risk of noncompliance (pronounced, “law enforcement”) with state data privacy laws.

So, the good news is that the FDA is aware of the issue, and there appears to be somewhat less of a “wild west” attitude towards IoT medical device security. The bad news is that 2020 is predicted to be a banner year for ransomware and medical device cybersecurity concerns generally.

James Long

6 Ways To Beef Up Your Email Security.

Re-posted from intothecyberbreach.com, originally published on February 10, 2020.

I have been setting up a Microsoft Exchange email server for a new project of mine that is related to my data privacy law practice. I hope to make an announcement sometime this week as to what the new project will look like. It’s all good stuff.

As I’m setting up my email server, I’m thinking about what steps I need to take to increase my own cybersecurity. It is obvious that I need to practice what I preach. So, here are some of the things that I’ve been implementing for my own business email:

Backups. Backups. Backups. Backups. Backups.

Everyone understands the concept of backing up your data. But backups are not a “set it and forget it” type of thing. What is being backed up? How? Where are the backups stored? How do you go about retrieving them? Do your backups work? Are they themselves secure? There is a small section of hell where lost souls are punished by having their computers AND their backups destroyed in the same catastrophe (by a fire, obviously). Don’t be one of those souls.

I’ve been burned by not backing up very recent personal data. (See what I did there?) If you save anything at all on your computer’s hard drive, you are likely guilty of this. It is really frustrating. Especially when you know better. I put this at the top of the list, because if you haven’t recently backed up all of your data, then you are setting yourself up for heartbreak. Frequency is an issue, retrieval is an issue, and all of this stuff needs to be tested.

You can run a cloud drive like OneDrive, Dropbox, or Google Drive, which have some security features built in. Make sure you understand what you are signing up for though. Free services are often free because your data will be mined. You might not care about that. As a lawyer, I have to care about that, because allowing Google to read my attorney-client communications can defeat the attorney-client privilege. So, there are pitfalls, and you need to know them. I don’t do any legal work on my free Gmail. The paid-for G Suite is more private, but I can’t say I’m very trusting of Google generally. So, I went more traditional with Microsoft.

You may want to also keep an external hard drive handy that is solely for the purpose of routine backups. I find these are the easiest to retrieve, but that is a two-way street. You need to make sure the backup drive itself is password protected and secure. Out of view is ideal. If anyone can just plug it in and access your files, all you are doing is creating more security holes. The thing with any of these methods is that its easy to forget your email data. Fortunately, Exchange backs up emails automatically and they should be accessible by anyone with admin privileges. Do yourself a favor though and attempt a retrieval now, BEFORE you actually have to.
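If you like to tinker, scripting your backups makes the “attempt a retrieval now” habit much easier to keep. Here is a minimal sketch in Python, standard library only, of a backup routine that also records a checksum so a later restore test can confirm the archive is intact. The paths and naming scheme are just illustrative, not a recommendation of any particular setup:

```python
import hashlib
import tarfile
import time
from pathlib import Path

def back_up(src: Path, dest_dir: Path) -> Path:
    """Archive src into dest_dir as a timestamped tar.gz and record
    a SHA-256 fingerprint alongside it for later integrity checks."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest_dir / f"{src.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    archive.with_name(archive.name + ".sha256").write_text(digest)
    return archive

def verify(archive: Path) -> bool:
    """Re-hash the archive and compare against the stored checksum."""
    stored = archive.with_name(archive.name + ".sha256").read_text()
    return hashlib.sha256(archive.read_bytes()).hexdigest() == stored
```

The specific tool matters far less than the habit it encodes: create the archive, record its fingerprint, and actually try restoring from it before disaster strikes.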

Multi-factor authentication

This should be at or near the top of your list (behind backups, if those are not already being done regularly). For the uninitiated, multi-factor authentication (also known as two-step authentication) is a process you may have noticed on a lot of online applications that asks you to verify your login by also entering a code on your cell phone. Online banking was one of the early adopters. It can be done in a variety of ways: a text message, an app, or a confirmation email that you have to click on to complete a subscription request. These are all examples of multi-factor authentication.

Gmail and Microsoft Exchange both have a two-step authentication setting. When turned on, you will get a security code sent to your phone or a backup email. Both systems also have authenticator apps that can streamline this process a bit.

I was slow to adopt this at first, because of the times it might slow down your workflow. But actually, when I was forced to use it through various applications, I got used to it pretty quickly, and found it to be a good way to keep out the bad guys. If you are a law firm, or another repository of someone’s personal information, especially in email form, this is a really cheap and easy way to prevent a breach. Remember, the new law in New York is that even if data is only “accessed”, it can trigger a data breach event that must be reported to law enforcement and affected consumers. Even if the data is accessed inadvertently and non-maliciously, the law requires a five year documentation period. Two-step authentication can help prevent those very simple incidents.
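If you are curious what those authenticator apps are actually doing, the codes follow a published standard, RFC 6238 (TOTP). Here is a rough sketch in Python using only the standard library; real systems add clock-drift windows, rate limiting, and secure storage of the shared secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password.
    secret_b32 is the base32 secret shared with the authenticator app."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if t is None else t) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at a key-dependent offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server and your phone each compute this independently from the shared secret and the current time, which is why the codes match without any network connection on the phone’s side.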

Practice “Least Privilege”

Biggie Smalls might have said it best: “Number three, never trust nobody.” If he were alive today, surely he would advocate for “Least Privilege” and “Zero Trust” security frameworks.

“Least Privilege” means that every user has the fewest privileges it can possibly get by with to perform its function. So, rather than giving your username all admin privileges, you would have a user for your day-to-day work, and then a separate admin user only for performing administrative functions. Consider whether every employee should have access to every file. In the Microsoft Office Suite, you can set up multiple admins that are limited in what they can do (one might be an Exchange admin, another might be able to change user passwords, etc.). The more you are able to separate these roles, the better.

“Least Privilege” is related to the framework for “Zero Trust” which is, I’m sure, going to be in the running for one of the most popular catch-phrases/buzzwords in 2020. The concepts are related, yet distinct. What they share is the idea that just because a user has gained access inside your network, doesn’t mean they should be given the keys to the company car (metaphorically).

As a lawyer, my office could have clients, adversaries, vendors, employees, and lost visitors looking for another office, on any given day. Unfortunately, any one of those people may intend harm to my system, or may just be an accident waiting to happen. You have to verify each step of the way. One example of “zero trust” is to get rid of the idea that once you are connected to the network, that somehow entitles you to access the cloud. It doesn’t and it shouldn’t. Further, once inside the cloud, it doesn’t entitle you to access the entire system.
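The same idea shows up directly in application code. Here is a toy sketch of default-deny, role-based permissions; the role and permission names are made up purely for illustration:

```python
# Hypothetical roles and permissions, for illustration only.
# Each role is granted only what its function requires.
ROLE_PERMISSIONS = {
    "staff":          {"read_own_files"},
    "exchange_admin": {"read_own_files", "manage_mailboxes"},
    "password_admin": {"read_own_files", "reset_user_passwords"},
}

def can(role: str, permission: str) -> bool:
    """Default-deny: anything not explicitly granted is refused,
    including unknown roles."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Notice that no role holds every permission: the Exchange admin can’t reset passwords and vice versa, mirroring the separated admin roles described above.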

Develop Basic Security Literacy Within Your Organization

When the Nigerian Prince comes knocking, don’t let him in. Most of us understand that on a basic level. But in business, the scams are more sophisticated. Recently, I’ve received a few emails purportedly from one of my co-workers asking when I will be in the office. Another colleague received a similar email from me, asking for their help with an emergent issue. Those emails set off red flags because my colleagues and I were peers, and the language of the email was clearly designed to invoke fear of one’s supervisor. But, with a different target, or a different sender, I suspect they would have gotten a response from someone in my organization. Of course, the reply email isn’t actually the person you are expecting, it goes elsewhere, and who knows what kind of information they can gather. It is limitless.

So, you need to train the users in your organization on the basics of information security. My rule of thumb is that if someone is sending me something in an email and claiming it is an emergency, I follow up with a phone call. You don’t have to explain to the person why you are calling, you can just say that you want to make sure you get it right the first time. You may find that the person you thought was emailing you has no idea about the email.

Another way this is done by hackers is to take over one email account and use it to gain information from other people. So the email address itself could even be legitimate, but not actually sent from the person you expected. Imagine getting an email from your spouse that says something like “Hi Honey, I’m at the store and my debit card won’t work, can you send me your credit card number to try and use yours?” That’s a really simple scam and all it requires is access to your email. If the hacker doesn’t change the password, they might even be accessing it without anyone’s knowledge.

There are a lot of anti-phishing, anti-scamming educational materials out there online. So, I’m not going to reinvent the wheel here. Just look into it, and make sure your team is trained on this stuff.

Physical Security Is A Necessary Part of Information Security

You can have all of the bells and whistles in regards to password usage, training of employees, and backups, but if someone can just find your phone in the park and access your email without some sort of passcode, then you aren’t secure. Conversely, if you are sticking random USB drives into your computer, then all of those passcodes aren’t going to help you.

Movies and television would have you believe that hacking looks a lot like the Matrix, with some trendy electronic music blasting in the background, and an exciting GUI with colorful lines of code streaming across the screen. Hacking can be a version of that. Although leather pants are far less popular in the hacking community than the Wachowski brothers would have you believe. More often, it’s just a person on a phone, asking the right questions, being friendly to receptionists, and charming their way into our hearts (and data). Be wise to what social engineering looks like. Remember that getting your purse snatched can constitute a data breach under many state laws, if you are holding electronic devices that contain other people’s personal information.

Password Management

My view on password management starts to make more sense once you’ve thought about physical security. A lot of companies are still having employees change their password every few months. I don’t advocate for that. For the last 20 years or so, I’ve held the view that a person who does not know their own password may be as dangerous to the system as a person who has a very weak password. Password managers have softened that view a little, but let me explain the thinking.

If you are unable to reuse your passwords, and must change them every few months, the chances that an employee is going to write down their password and stick it to their monitor become much higher. In that instance, the organization went from very high security to a situation where the cleaning crew, all visitors, other co-workers, and all sorts of potential invaders can plainly see your password. Now, this may be less of an issue for you if you are practicing two-step authentication. But, if your work computer is considered a “trusted” computer, you may still end up in a bad spot. I would rather that people have a password they can memorize and not have to write down, than have them use random digits and letters that have to be written out and left on their desk.

That said, reusing the same password repeatedly across systems is still considered poor practice, and remembering all of those passwords for all of those different accounts gets pretty challenging. For those reasons, you may want to consider using a password manager. Yes. They CAN be hacked too. But the data tends to be encrypted, and I still think the risk is lower than doing it as described above. I’ve seen good recommendations on 1Password ($3 per month) and Bitwarden (free for personal, $5/month for business). I’m going with Bitwarden, but there are a lot of good options out there.
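If you want to see how cheap strong passwords are to generate, Python’s secrets module (designed for security-sensitive randomness, unlike the ordinary random module) does it in a few lines. A sketch; the four-word list in the usage example below is just a placeholder for a real word list of several thousand entries:

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """A random-character password, best kept in a password manager."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def passphrase(words, n: int = 4) -> str:
    """An easier-to-memorize passphrase built from a word list."""
    return "-".join(secrets.choice(words) for _ in range(n))
```

A multi-word passphrase drawn from a large word list is both easier to memorize and harder to guess than a short string of random characters, which is exactly the trade-off the sticky-note problem above is about.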

Conclusion

As Biggie once said, “follow these rules and you’ll have maad bread to break up.” The last recommendation I can offer is to get a professional to look at your system if you are able. You don’t have to have an IT department to have a secure system. Most parts of the U.S. have plenty of IT firms that would be glad to come to your home or office and figure out what you can do to be more secure. These are just the starting points and steps that I’m taking. There is always more to do, and evolution is part of the security game.

Last, none of what I’ve said here ensures compliance with any data privacy laws. This is technical advice from my personal experience. So, don’t take it as legal advice for what you need to do in your state, and don’t take it as a definitive version of everything that an IT pro would suggest either.

Stay safe out there!

James Long

California Legislature Makes Last Ditch Amendments to CCPA

Re-posted from intothecyberbreach.com, originally published on September 17, 2019.

The CCPA, which remains set to go into effect on January 1, 2020, was amended with no less than five Assembly bills last week. The amendments, covered below, are awaiting Governor Newsom’s signature, as is Assembly Bill 1202, which requires data brokers to register with the California Attorney General. The Governor has until October 13, 2019 to sign. These were passed as separate bills, so it is possible the Governor could accept some and reject others. However, given the dominance of Democrats in the legislature and governor’s office both, the Governor is expected to sign.

Change is always exciting, but perhaps the biggest news out of this round of amendments is that no additional amendments to the CCPA are expected before it goes into effect on January 1st. So, while I used to tell friends at cocktail parties that the CCPA could be delayed until the spring, I now tell them that life as they know it will end on New Year’s Day. Yeah, I don’t get invited to much anymore.

For the most part, I view these as positive changes. I’ve heard them described as “pro-business” amendments, which is fine. I see them more as an effort to make the CCPA easier to understand, and a steering away from definitions that confuse more than they clarify. A brief description of each pending bill is below.

Assembly Bill 25 exempts for a period of one year any “Personal information that is collected by a business about a natural person in the course of the natural person acting as a job applicant to, an employee of, owner of, director of, officer of, medical staff member of, or contractor of that business to the extent that the natural person’s personal information is collected and used by the business solely within the context of the natural person’s role or former role as a job applicant to, an employee of, owner of, director of, officer of, medical staff member of, or a contractor of that business.” According to the Assembly’s comments on the bill, “the one-year sunset provides the Legislature time to more broadly consider what privacy protections should apply in these particular employment-based contexts, and whether to repeal, revise, and/or make these exemptions permanent in whole or in part moving forward.”

Assembly Bill 1146 removes the right to opt out from vehicle information or ownership information retained or shared between a new motor vehicle dealer and the vehicle’s manufacturer, if the information is shared for the purpose of effectuating or in anticipation of effectuating a vehicle repair covered by a vehicle warranty or a recall, as specified. The bill would define terms for that purpose. The bill would also except from the right to request a business to delete personal information about the consumer the personal information that is necessary for the business to maintain in order to fulfill the terms of a written warranty or product recall conducted in accordance with federal law.

Assembly Bill 874 defines “publicly available” to mean information that is lawfully made available via government records. The bill also clarifies that personal information does not include deidentified or aggregate consumer information, and that personal information includes information that is “reasonably capable” of being associated with a particular consumer or household, as opposed to merely “capable” of being associated. This distinction is not so much a policy change as a recognition that the CCPA as originally written was overinclusive of data that could in theory, possibly, maybe, someday, be used to identify an individual.

Assembly Bill 1202 requires data brokers to register with, and provide certain information to, the Attorney General. The bill would define a data broker as a business that knowingly collects and sells to third parties the personal information of a consumer with whom the business does not have a direct relationship, subject to specified exceptions. The bill would require the Attorney General to make the information provided by data brokers accessible on its website and would make data brokers that fail to register subject to injunction and liability for civil penalties, fees, and costs in an action brought by the Attorney General, with any recovery to be deposited in the Consumer Privacy Fund.

Assembly Bill 1355 makes several changes. First, it refines the existing FCRA exemption to ensure it applies to any activity involving the collection, maintenance, disclosure, sale, communication, or use of any personal information regarding a consumer’s credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living by a consumer reporting agency, to the extent such activity is subject to the FCRA, with some exceptions. Second, the CCPA generally will not apply to business-to-business communications and transactions for a period of one year. Third, the CCPA does not require businesses to collect or retain information they would not collect in the ordinary course of business, or to retain it for longer than they would otherwise retain such information in the ordinary course of business. Fourth, data that is encrypted or redacted is not covered by the CCPA’s data breach provisions. Lastly, the Attorney General is given authority to promulgate regulations to effectuate certain aspects of the CCPA.

Assembly Bill 1564 provides that a business that operates exclusively online and has a direct relationship with a consumer from whom it collects personal information is only required to provide an email address for submitting requests for information required to be disclosed, as specified.

James Long

CCPA Begins, NY SHIELD Explained.

Re-posted from intothecyberbreach.com, originally published on January 28, 2020.

As of January 1, 2020, the California Consumer Privacy Act (CCPA) went into effect. I’m going to dig a little deeper into how that seems to be playing out later, but the purpose of this post is really just to mark the occasion. And also to point out that the second installment of NY SHIELD comes into effect in March 2020. For both of these acts, you don’t have to be located in California or New York for the law to apply to you. A lot of companies are starting to realize this, and are scrambling. The good news is that if you are a larger company that is CCPA compliant, pre-incident, you are on the right track for New York too, although the requirements for the two are not equivalent. National companies (i.e., all internet-based businesses) will have to do separate compliance for both. But if you are New York-centric, you are probably breathing a sigh of relief that the NY SHIELD Act does not create a private cause of action against companies for data breach (unlike California). However, there are still pitfalls aplenty. Specifically, on October 23, 2019, the Stop Hacks and Improve Electronic Data Security Act (the SHIELD Act) imposed data breach notification requirements on any business that owns or licenses certain private information of New York residents, regardless of whether it conducts business in New York. In March 2020, the second part of the Act requires businesses to develop, implement and maintain a data security program to protect private information.

We haven’t focused on NY SHIELD as much (and I suspect that will change soon), so, just to recap, New York’s new data privacy law:

Expands When A “Breach” Is Triggered

Under the old rules, for a security incident to be called a “breach” and thus trigger the state’s breach notification requirements, there must be an “unauthorized acquisition or acquisition without valid authorization of computerized data that compromises the security, confidentiality, or integrity of personal information maintained by a business.” In English, that means that someone (or something) must “acquire” the data. Typically, that means they must access the data AND come away with it. In other words, under the old law, a breach was not triggered by merely hacking into a server and seeing that there are a number of files containing personal information. The hacker would also have to take the files, or open them and record them somehow. The hacker would have to walk away with some ability to recall or review those files, whether by copying them or some other means. That was then. This is now.

The NY SHIELD Act expands the definition of a breach by including ANY unauthorized access. That means if our hypothetical hacker gains access to your server, but never copies the personal information in the server, this would still count as a breach and would require breach notification.

Expands The Meaning of “Private Information”

The NY SHIELD Act expands the definition of private information to include: a combination of any personal identifier and any account, credit, or debit card number, if it is possible to use that number to access an individual’s financial account without any additional identifying information; OR a combination of any personal identifier and certain biometric information; OR a username and password combination that would give access to an online account.

All of this creates interesting possibilities for what could be considered private information. For instance, your username and password to even the most useless online accounts could trigger a breach notification requirement. Further, under the biometric category, this could include your name and a picture of your face, since a picture of your face is, after all, “data generated by electronic measurements of an individual’s unique physical characteristics, such as a fingerprint, voice print, retina or iris image, or other unique physical representation or digital representation of biometric data which are used to authenticate an individual’s identity.” What feature is better at authenticating your identity than your face? Suddenly, unauthorized access to the school yearbook committee’s folder may become a notifiable incident. I’m going to stay out of the debate as to whether this is a good idea or a bad one, but most people can agree that it represents a significant expansion.

Creates New Obligations For Keeping Private Information Secure

The NY SHIELD Act creates an obligation to maintain “reasonable” safeguards starting in March 2020. The word “reasonable” is a favorite among attorneys, especially attorneys who bill by the hour. Here, mid-size and large companies have specific milestones they must meet. For smaller companies, reasonableness will typically be judged in terms of what precautions have been taken. Basic stuff like multi-factor authentication should be a given. Implementing a company-wide security protocol, and identifying key players to run said program, are also going to count toward reasonableness. I would argue anything that shows proactive steps and preparedness will go a long way.

So, one question that the business community may have is what happens if they do not take reasonable safeguards? That can get complicated. True, the great state of New York may impose fines of up to $5,000 per violation. But, the consequences might be worse than that. For instance, would your insurance policy still cover you if you haven’t complied with the law? Suddenly that litigation or that business loss may be uninsured. That sting is going to exceed $5,000 very quickly.

As I alluded to, the Act takes size into account. Businesses with fewer than 50 employees, less than $3 million in gross revenues in each of the last three fiscal years, or less than $5 million in year-end total assets must maintain “reasonable administrative, technical and physical safeguards that are appropriate for the size and complexity of the small business, the nature and scope of the small business’s activities, and the sensitivity of the personal information the small business collects from or about consumers.” Businesses larger than that must implement a data security program containing the administrative, technical and physical safeguards enumerated in the law (see below). Thus, while the CCPA has been getting all of the attention, the NY SHIELD Act puts a number of requirements on companies that are too small for the CCPA to cover. The enumerated reasonableness requirements are as follows:

According to § 899-bb(2)(b)(ii)(A), organizations can implement reasonable administrative safeguards by:

  • Designating one or more employees to coordinate the security program

  • Identifying reasonably foreseeable internal and external risks

  • Assessing the sufficiency of safeguards in place to control the identified risk

  • Training and managing employees in the security program practices and procedures

  • Verifying that the selection of service providers can maintain appropriate safeguards and requiring those safeguards by contract

  • Adjusting the security program in light of business changes or new circumstances

According to § 899-bb(2)(b)(ii)(B), organizations can establish reasonable technical safeguards by:

  • Assessing risks in network and software design

  • Assessing risks in information processing, transmission, and storage

  • Detecting, preventing, and responding to attacks or system failures

  • Regularly testing and monitoring the effectiveness of key controls, systems, and procedures

According to § 899-bb(2)(b)(ii)(C), organizations can create reasonable physical safeguards by:

  • Assessing risks of information storage and disposal

  • Detecting, preventing, and responding to intrusions

  • Protecting against unauthorized access to or use of private information during or after the collection, transportation, and destruction or disposal of the information

  • Disposing of private information within a reasonable amount of time after it is no longer needed for business purposes by erasing electronic media so that the information cannot be read or reconstructed

Expands Breach Notification Requirements

When a New York resident’s personal information is accessed without authorization under the NY SHIELD Act, the affected New York residents, the New York Attorney General, the New York Department of State, and the New York State Police must be notified of the breach. If the breach affects more than 500 New Yorkers, you will have 10 days from the date the breach is discovered to notify the attorney general, and the fines for noncompliance have increased as well. Further, if over 5,000 residents were affected by the breach, notification must also be made to consumer reporting agencies.
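The notification tiers above can be sketched as a simple decision rule. This is only an illustration of the post’s description, not legal advice: the function and field names are my own, and the statute contains nuances (such as when the 10-day clock starts) that should be confirmed with counsel.

```python
def shield_breach_notifications(affected_ny_residents: int) -> dict:
    """Rough sketch of the NY SHIELD Act notification tiers described above.

    Illustration only -- not a compliance determination.
    """
    plan = {
        # Baseline recipients for any notifiable breach of a NY resident's data.
        "notify": [
            "affected NY residents",
            "NY Attorney General",
            "NY Department of State",
            "NY State Police",
        ],
        # Deadline (in days) for the Attorney General filing, if triggered.
        "ag_deadline_days": None,
    }
    if affected_ny_residents > 500:
        # Breaches affecting more than 500 New Yorkers carry a 10-day clock.
        plan["ag_deadline_days"] = 10
    if affected_ny_residents > 5000:
        # Larger breaches add the consumer reporting agencies.
        plan["notify"].append("consumer reporting agencies")
    return plan
```

For example, a breach affecting 6,000 residents would add the consumer reporting agencies and trigger the 10-day filing, while one affecting 100 residents would involve only the baseline recipients.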

Take Aways

I think the takeaway from where we sit right now is that the NY SHIELD Act is about to cause a scramble similar to the one we are seeing in California. New York companies are going to need to get compliant, or risk enforcement. Is the Attorney General likely to start prosecuting violations on March 1st? Doubtful. But the writing is on the wall. And unlike the CCPA, even the little guys are affected.

Are you a startup trying to figure out how to get NY SHIELD compliant? (Hint: do you think your investors might ask about this?) Now is the time to get with the program. Reach out to me at jlong@long.law if you want to schedule a free consultation on data privacy compliance.

James Long

Are You Liable for the Data Shenanigans of Others? (Part 2 – Controllers and Processors)

Re-posted from intothecyberbreach.com, originally published on September 5, 2019.

In Part 1 of this post, we laid a framework for the legal landscape for American businesses and their potential exposure, very broadly, to state and international law regarding data privacy. If you missed it, and you could use a 30,000-foot view, it’s here.

Now that you know the basics behind the GDPR and CCPA, what responsibilities or liabilities do you have with regard to entities that process data they got from you? Let’s walk through a scenario to illustrate what I mean…

Say you’ve got a website that attempts to develop a mailing list or subscriber list. It’s a site about designer sneakers, and it notifies the customers on that list whenever certain hard-to-locate sneakers are available for sale in their size. The website is owned by you, belongs to you, is run by you. But… somewhere, you’ve got this little snippet of code on the site which allows users to subscribe to your page and enter their name, address, email address, phone, and shoe size. Now let’s say that all of that information about your client gets stored on a website that does NOT belong to you. So, think of a separate contact management application that you have linked into your site, but that is run by another company.
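As a rough illustration of that data flow, here is a sketch of the record your site would hand off to the vendor. The field names and the serialization step are invented for this example; the point is simply that every field in the payload is personal information leaving a system you control for one you don’t.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class Subscriber:
    """The data your signup form collects (fields per the example above)."""
    name: str
    address: str
    email: str
    phone: str
    shoe_size: str


def build_vendor_payload(sub: Subscriber) -> str:
    """Serialize the subscriber record as it would leave your site for the
    third-party contact manager. Hypothetical format -- real vendor APIs
    differ -- but every field here is personal data you now share."""
    return json.dumps(asdict(sub))


# In a real integration this payload would be POSTed to the vendor's API;
# under the GDPR's framing, you chose what to collect and why (controller),
# and the vendor stores and handles it on your behalf (processor).
```
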

Under the GDPR framework, you would be what is called a “controller” of the data your customer has shared, and the company that handles your contact management system would be the “processor” of that data.

The GDPR defines a “controller” as an entity that “determines the purposes and means of the processing of personal data.” A “processor” is defined as an entity that “processes personal data on behalf of the controller.” So, why do we care?

According to the GDPR, the data controller is primarily responsible for getting consent to collect data (that will be a topic for another day), revoking that consent, as well as notification of any breaches of that data. This is true even though it may be the processor that actually possesses the data.

Regarding revocation… Recall that under the GDPR, you have a right to be forgotten. Anyone located in the European Union can contact an American company and demand that any data about them be removed. Pretty neato for them! Total headache for you!

So, back to our example: You’ve got this lit sneaker shop online, you have a vendor that collects your customer contact information and their shoe size, and someone contacts you and demands to be forgotten. As the data controller, it would be your responsibility to contact the processor and have them remove that data. It might be as easy as going onto your admin page on the processor’s website and removing the information. But… data storage is rarely that easy, and it is more likely that you will have to check the processor’s privacy agreement with you (ahem, which you read ahead of time…. right?) and possibly even contact a human to discuss how the data processor handles GDPR rights revocation requests. As a data processor, your vendor then has to comply with that request to remove the data for you. Simple, right? No, of course not. But, if you’ve followed along this far, you’re already a few steps ahead of the game here. Might as well see the ending, no?

As you know, as a loyal reader of this blog, and as a person who has ever shopped at a big box retail store, when a breach happens the company who was breached has to provide notification to the people whose personal information has been affected… So, what happens when the data that came through your website and into the vaults of your third-party vendor gets hacked into? How about if that third-party vendor did something supremely stupid to enable the breach?

Article 28 of the GDPR requires that “where processing is to be carried out on behalf of a controller, the controller shall use only processors providing sufficient guarantees to implement appropriate technical and organisational measures in such a manner that processing will meet the requirements of this Regulation and ensure the protection of the rights of the data subject.” Thus, not only do Europeans spell “organizational” wrong, they also require controllers to only use processors that are GDPR compliant. Thus, if your vendor is doing something supremely stupid, and you had reason to know about it ahead of time, you’ve violated GDPR. Congrats!

This issue recently came up for Delta Air Lines, which initiated a lawsuit against its third-party vendor [24]7.ai Inc. after the vendor experienced a data breach in 2017. Delta alleges that its vendor had weak security protocols and did not notify Delta of the breach for five months. Of course, Delta, itself, has been fending off lawsuits from its own customers as a result of this breach.

Under the GDPR, “[t]he processor shall notify the controller without undue delay after becoming aware of a personal data breach.” Delta alleges that the data pirate that hacked into its vendor’s system had unauthorized access to names, addresses, and payment card information of approximately 800,000 to 825,000 Delta customers in the U.S. The vendor failed to notify Delta of the breach until one month after Delta renewed its contract with the vendor. Further, the vendor contract required that Delta be notified of any breach. The basis of Delta’s suit is a breach of contract and negligence, not GDPR compliance, per se. Be that as it may, many, if not most, vendor contracts from major players nowadays are going to include terms or requirements that the vendor be GDPR compliant, however they choose to define that. That’s a solid endorsement that you should consider similar requirements in your own vendor contracts.

Back stateside, the new California Consumer Privacy Act (“CCPA”), discussed in Part 1, creates a private cause of action for consumers affected by a data breach when that breach is caused by a business’s violation of the duty to implement and maintain reasonable security procedures. Naturally, the plaintiffs’ bar will contend that all breaches are caused by a failure to implement reasonable security procedures. How does that affect our example though?

The CCPA is one avenue where your business may face liability when your vendor fails to secure the data that you have provided it. Fortunately, the CCPA only applies to certain businesses. If you are still in startup mode (< $25 million in revenue), chances are the CCPA excludes you, unless you are in the business of buying or selling personal information. While the CCPA does not use terms like “controllers” and “processors”, the concept is a useful one that many teams are already familiar with. Your vendors will attempt to opt-out of any liability to you for a breach, meanwhile, the CCPA squarely puts the onus on you to ensure the safety of the data being used. The CCPA has a private cause of action, which allows not only for state enforcement, but also for private individuals to sue the pants off of you.
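To make the applicability point concrete, here is a minimal sketch of the screening described above: revenue under $25 million likely excludes you, unless you are in the business of buying or selling personal information. The function and parameter names are my own, and the statute has additional criteria (such as consumer-record volume) not modeled here.

```python
def ccpa_likely_applies(annual_revenue_usd: float,
                        trades_in_personal_info: bool) -> bool:
    """Rough screen for CCPA applicability per the discussion above.

    Models only two of the statute's criteria -- an illustration,
    not a compliance determination.
    """
    if trades_in_personal_info:
        # Businesses buying or selling personal information are covered
        # regardless of revenue.
        return True
    # Otherwise, the $25 million revenue threshold is the relevant screen.
    return annual_revenue_usd >= 25_000_000
```

So a pre-revenue startup that isn’t trafficking in personal data would likely fall outside the CCPA, while a data broker of any size would be covered.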

So what is the takeaway?

First, make sure you understand what data is being collected by any vendors that you are working with. Remember, vendors can be anything from the applications that you add to your website to certain backend service providers. Given today’s expanded view of private personal data, it is likely they are collecting something that would trigger GDPR or CCPA.

Second, read your terms and conditions with your vendors. If you are using systems like Mailchimp, Google Analytics, or any number of other plug-in style apps on your website to gather data, you are unlikely to be in a position to negotiate with them. But at least know what you are signing up for, and decide whether it’s worth the risk.

Third, if you are negotiating with vendors, don’t accept their denial of liability for their own data shenanigans. They shouldn’t become your cybersecurity insurance policy, but they shouldn’t be creating unchecked liability for you either.

Fourth, consider using GDPR compliance efforts as an opportunity to work with your vendors to be clear about what they are doing, why, how the data is being protected, and what they are required to do in the event things go sideways. Remember that the purpose of a contract is to prevent litigation.

Last, no legal blog post would be complete without an admonition to ask a lawyer and get actual legal advice.

James Long

Are The New York Department of Health’s New Breach Notification Requirements for Healthcare Providers Actually Authorized?

Re-posted from intothecyberbreach.com, originally published on August 22, 2019.

Early last week, a letter from the New York Department of Health was issued to Administrators and Technology Officers in the Healthcare Industry in New York, which states, essentially, that the NYDOH has implemented a new notification protocol in the event of a data breach at a healthcare facility.

The letter states “We recognize that providers must contact various other agencies in this type of event, such as local law enforcement. The Department, in collaboration with partner agencies, has been able to provide significant assistance to providers in recent cyber security events. Our timely awareness of this type of event enhances our ability to help mitigate the impact of the event and protect our healthcare system and the public health.”

The new protocol is directed to hospitals, nursing homes, diagnostic and treatment centers, adult care facilities, home health agencies, hospices, and licensed home care services agencies.

The letter goes on to note that “Providers should ensure they make any other notifications regarding emergency events that are already required under statute or regulation. For example, a cyber security event should be reported to the New York Patient Occurrence Reporting and Tracking System (NYPORTS), under Detail Code 932.”

Now, I might be accused of being late to the party on this one, since the letter appears to have gone out August 12th. But, surprisingly, I’ve seen almost no coverage of this change, other than here. So, I can probably be forgiven for being slow on the uptake with this one.

I reached out to the DOH regarding what authority or regulation they are relying on to implement this new requirement. Again, I may be slow on the uptake.

According to N.Y. Gen. Bus. Law § 899-aa, “In the event that any New York residents are to be notified, the person or business shall notify the state attorney general, the department of state and the division of state police as to the timing, content and distribution of the notices and approximate number of affected persons.  Such notice shall be made without delaying notice to affected New York residents.” So, that doesn’t say anything about notifying the DOH. Conversely, HIPAA is a federal law, and that requires notification to federal agencies of a breach. New York Public Health Law – PBH § 2805-l deals with reporting to DOH of adverse events, but its definition does not appear to contemplate data breaches as adverse events either.

Title 10 of the New York Codes, Rules and Regulations § 405.8 calls for adverse event reporting of “(13) disasters or other emergency situations external to the hospital environment which affect hospital operations.” This seems overly broad if it is meant to apply to a data breach. Before I stick my foot any further in my mouth, I will admit that I am not a healthcare expert, and maybe there is clear black-letter law that authorizes this new protocol. I just haven’t seen what that is yet. I’ll put a pin in this one and see if I can find out.

The reason I bring it up is twofold:

  1. It seems fishy to me that the letter does not cite any statute or regulation on which it relies for the change in authority. That is somewhat unusual in my experience, and it is potentially an issue: if agencies change requirements willy-nilly, they create a nearly impossible set of rules to follow (rules that are likely to be unfair, and not fully vetted in the comment process). It’s going to spell disaster for some poor healthcare facility, and many of those are small businesses.

  2. The letter seems to offer some not-so-great advice as well, as it appears to suggest that your first call should be to DOH. Yes, it acknowledges that you have other legal obligations as well (and maybe this falls under the adverse event reporting requirement), but it ignores a really major issue. So, without further ado, here is some FREE LEGAL ADVICE in the event that your healthcare facility has a data breach: Before you make statements to a public agency about your breach, talk to a lawyer who specializes in this stuff. Doesn’t have to be me, but talk to someone.

Would definitely like to hear from friends and colleagues on this one.

Update: August 30, 2019. It’s been about a week and I have not heard back from the Department of Health on my request about the basis for the direction in their letter.
