Legal Implications of Electronic Surveillance in Event Security

Hello everyone,
Usually, I create posts based on questions I receive. This post is based on questions I got from my students at UCLA Extension, where I teach an Information Security course.
A recent discussion centered on the legal implications of electronic surveillance in event security. The specific scenario involved a security company providing security for events with high-profile visitors. The company is considering using various electronic surveillance tools such as:
- Sensors to detect Bluetooth devices
- Systems to collect caller IDs and MAC addresses
- Facial recognition technology
- Gathering system data to conduct reconnaissance on visitors
The big question: Can this company legally use such surveillance methods if disclaimers are posted?
Electronic surveillance raises complex legal, ethical, and regulatory concerns. While posting disclaimers can be helpful in establishing implied consent, they do not necessarily override privacy laws at the federal and state levels. Below are some key legal considerations:
1. Federal Laws Governing Electronic Surveillance
Several federal laws come into play when discussing the collection of electronic data at events. The regulation of data privacy and security in the United States is complex, as there is no universal federal cybersecurity statute that governs all industries. Instead, a patchwork of federal statutes and regulatory frameworks applies depending on the type of data being collected, the entity handling the data, and the industry in which the event operates.
Among the most relevant federal laws regulating electronic data collection are:
A. The Electronic Communications Privacy Act (ECPA) (1986)
B. Federal Trade Commission (FTC) Regulations
C. The Computer Fraud and Abuse Act (CFAA)
Let’s first discuss federal laws and regulations. Later, we will cover Supreme Court decisions and state laws.
A. The Electronic Communications Privacy Act (ECPA) (1986)
The Electronic Communications Privacy Act (ECPA) consists of three main sections that regulate electronic data collection and surveillance. Title I, known as the Wiretap Act, prohibits the interception of electronic communications, including phone calls, text messages, and other transmissions sent in real time. Title II, the Stored Communications Act, places restrictions on the collection, access, and disclosure of stored electronic data, such as emails or cloud-based files. Title III, the Pen Register Act, governs the collection of dialing, routing, and addressing information, ensuring that law enforcement and private entities follow legal procedures when tracking metadata related to electronic communications.
The key takeaway is that actively collecting caller ID data or intercepting Bluetooth communications could violate the ECPA. Even if devices emit signals passively, gathering and storing that data without proper authorization could result in legal exposure. Companies and organizations engaged in electronic surveillance, data tracking, or signal interception must ensure compliance with federal regulations to avoid potential enforcement actions and legal liabilities.
B. Federal Trade Commission (FTC) Regulations
The Federal Trade Commission (FTC) plays a crucial role in regulating data privacy and security in the United States. While there is no single, comprehensive federal law governing data privacy, the FTC enforces consumer privacy protections under its authority to prevent unfair or deceptive business practices as outlined in Section 5 of the FTC Act. This means that even in the absence of specific data protection laws, the FTC has the power to take action against companies that mislead consumers about how their data is collected, used, or secured.
In recent years, the FTC has shifted its focus from privacy alone to data security as a fundamental issue in consumer protection. Privacy generally refers to how businesses collect, store, and share personal information, ensuring that consumers are aware of and consent to such practices. However, the FTC argues that privacy protections are meaningless unless companies also implement strong data security measures to prevent breaches, hacking, or unauthorized access to sensitive information.
By emphasizing data security, the FTC holds companies accountable for not only how they collect and use consumer data but also for how well they protect it from cyber threats. Without adequate security safeguards, even well-intentioned privacy policies can fail, leaving consumers vulnerable to identity theft, fraud, and financial harm. The agency continues to push for higher security standards, reinforcing that businesses must take proactive steps to protect consumer data or face regulatory action.
Example: FTC vs. Equifax (2019) – Poor Data Security Led to a Massive Breach
In 2017, Equifax, one of the largest credit reporting agencies, experienced a massive data breach that exposed the personal information of 147 million consumers. The breach occurred because Equifax failed to patch a known security vulnerability, leaving its systems vulnerable to cyberattacks. As a result, sensitive data, including Social Security numbers, birth dates, addresses, and financial information, was compromised.
The Federal Trade Commission (FTC) took legal action against Equifax, alleging that the company failed to implement reasonable data security measures to protect consumer information. Despite handling vast amounts of sensitive financial data, Equifax did not take the necessary precautions to prevent unauthorized access, making it liable under federal consumer protection laws.
As part of the settlement, Equifax agreed to pay $575 million, including $300 million designated for consumer compensation. The settlement also required the company to improve its cybersecurity practices and take steps to prevent similar incidents in the future.
The key lesson from the Equifax case is that legally obtaining consumer data does not absolve a company of responsibility if its data security practices are inadequate. The FTC holds businesses accountable not only for how they collect and use consumer information but also for how well they protect it. Companies that handle sensitive personal data must proactively invest in cybersecurity, regularly update security protocols, and ensure compliance with industry standards to avoid regulatory action and significant financial penalties.
FTC’s Common Law on Data Security: Broad & Expanding Requirements
Unlike traditional privacy laws with explicit statutory requirements, the FTC enforces data security through case law—meaning it establishes legal standards through enforcement actions rather than specific legislation. This has created a de facto “common law” of data security, requiring companies to:
✔️ Minimize Data Collection – Only collect what is strictly necessary.
✔️ Disclose Data Practices Clearly – Avoid misleading terms or vague policies.
✔️ Implement Strong Security Measures – Protect stored consumer data.
✔️ Monitor for Unauthorized Access – Actively prevent and detect breaches.
✔️ Allow Consumers to Opt-Out – Provide users control over their data.
✔️ Ensure Third-Party Security Compliance – Vendors and partners must also follow data security standards.
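To make these expectations a bit more concrete, here is a minimal Python sketch of a data-collection inventory a security firm might keep for an event and check against the principles above. The field names, thresholds, and warning messages are my own illustrative assumptions, not an FTC-mandated format.

```python
from dataclasses import dataclass, field

@dataclass
class DataPractice:
    """One category of data the firm collects at an event (illustrative only)."""
    category: str            # e.g., "bluetooth_mac_address"
    purpose: str             # why it is collected
    retention_days: int      # how long it is kept (0 = not stored)
    shared_with: list = field(default_factory=list)  # third parties, if any
    opt_out_available: bool = False

def compliance_flags(practices):
    """Return human-readable warnings for practices that conflict with the
    FTC expectations listed above (minimization, disclosure, opt-out)."""
    warnings = []
    for p in practices:
        if not p.purpose:
            warnings.append(f"{p.category}: no documented purpose (minimization/disclosure risk)")
        if p.retention_days > 365:
            warnings.append(f"{p.category}: retained more than a year without a clear need")
        if p.shared_with and not p.opt_out_available:
            warnings.append(f"{p.category}: shared with third parties but no opt-out offered")
    return warnings

if __name__ == "__main__":
    inventory = [
        DataPractice("facial_scan", "entry verification", retention_days=30,
                     shared_with=["venue_operator"], opt_out_available=True),
        DataPractice("bluetooth_mac_address", "", retention_days=730),
    ]
    for w in compliance_flags(inventory):
        print("WARNING:", w)
```

The point of a checklist like this is not legal cover in itself; it simply forces the firm to write down, per data category, the purpose, retention period, and sharing arrangements it will later have to disclose.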
Example: FTC vs. Facebook (2019) – $5 Billion Fine for Deceptive Data Practices
The Federal Trade Commission (FTC) imposed a record-breaking $5 billion fine on Facebook after determining that the company had misled users about how their personal data was shared with third parties. The case stemmed from Facebook’s involvement with Cambridge Analytica, a political consulting firm that gained unauthorized access to the data of 87 million users. Facebook had allowed this data transfer without obtaining clear and informed consent from its users, violating federal consumer protection laws.
The FTC’s investigation revealed that Facebook had failed to properly monitor third-party access to user information. Despite having privacy policies in place, Facebook did not enforce adequate safeguards to ensure that external parties complied with its data policies. This oversight ultimately led to widespread misuse of consumer data, raising concerns about the company’s commitment to protecting user privacy.
As part of the settlement, Facebook was required to implement a new privacy compliance program designed to strengthen oversight, limit third-party access, and increase transparency in its data-sharing practices. Additionally, the company was subjected to independent oversight, ensuring that it adhered to stricter privacy protocols moving forward.
The key lesson from this case is that privacy disclaimers alone are not enough to protect a company from regulatory action. The FTC has the authority to take enforcement action if it determines that consumers were misled or not given adequate control over their personal data. Even if a company includes disclosures, it must ensure that those disclosures are clear, accurate, and supported by actual business practices that prioritize consumer privacy. The Facebook case underscores the importance of proactive data governance, regulatory compliance, and the need for businesses to take privacy commitments seriously to avoid legal and financial consequences.
FTC & Event Surveillance: What This Means for Security Companies
Event security firms that use electronic surveillance must ensure they comply with FTC expectations, particularly when handling consumer data. This is especially important if they collect device data, such as MAC addresses, caller IDs, or Bluetooth signals, as well as if they implement facial recognition or biometric tracking. Additionally, if these firms share collected data with third parties, such as law enforcement agencies or private vendors, they must ensure that proper safeguards are in place to protect consumer privacy.
Even when disclaimers are posted informing attendees of data collection practices, the FTC can still take action if it determines that the company is not clearly informing attendees about data collection, failing to protect sensitive biometric data, or storing and sharing consumer data without proper security safeguards. The FTC has consistently emphasized that consumer consent alone is not enough if a company’s data security practices do not meet industry standards. Event security firms must take proactive measures to ensure transparency, implement strong security protections, and comply with federal consumer protection laws to avoid regulatory scrutiny.
Example: FTC vs. Retina-X Studios (2019) – Inadequate Data Security & User Consent
Retina-X, a company that developed “stalkerware” surveillance apps, marketed its products as parental monitoring tools designed to track children’s activities on mobile devices. These apps allowed users to remotely monitor calls, messages, location data, and other personal information without the knowledge of the person being tracked. While the company positioned these apps as legitimate parental control tools, they were also widely used for spying on partners, employees, and other individuals without their consent.
Despite claiming to offer secure monitoring solutions, Retina-X failed to implement proper security protections for the data it collected. This oversight left vast amounts of sensitive personal information exposed and easily accessible to hackers. The Federal Trade Commission (FTC) intervened, determining that the company’s inadequate security practices put users at serious risk. As part of the resulting settlement, Retina-X was barred from selling monitoring apps unless it took specific steps to ensure they would be used only for legitimate purposes, and it was required to delete the data it had improperly collected.
A critical issue in the case was that users had technically consented to monitoring by agreeing to install the software. However, the FTC still took enforcement action, emphasizing that user consent does not absolve a company of its responsibility to secure collected data. The case reinforced the FTC’s stance that data security failures constitute unfair business practices, even when users knowingly provide their information. The Retina-X case serves as a strong warning to companies handling sensitive personal data: failure to implement adequate cybersecurity measures can result in severe regulatory penalties, regardless of whether the data collection itself was legally permitted.
Privacy Disclaimers Alone Are Not Enough
Even if a security company posts disclaimers, the FTC can still enforce action if data practices are:
- Unfair (collecting excessive or sensitive data).
- Deceptive (not clearly informing consumers).
- Insecure (failing to protect collected data from breaches).
Best Practices for Legal Compliance:
✅ Clearly disclose what data is collected, how it’s used, and how long it’s stored.
✅ Allow consumers to opt out of biometric or tracking-based surveillance.
✅ Implement strong cybersecurity measures to prevent breaches.
✅ Regularly audit third-party partners to ensure data security compliance.
Final Thought: The FTC has made it clear that privacy and security are interconnected—companies must not only be transparent but also ensure collected data is well-protected.
C. The Computer Fraud and Abuse Act (CFAA)
I included this statute because it is the primary law under which the federal government prosecutes hacking offenses, often resulting in federal prison sentences. However, the Computer Fraud and Abuse Act (CFAA) is an outdated statute that has faced criticism for its broad language and lack of clarity. One of its key issues is that it does not clearly specify the required mental state (or intent) necessary to establish a violation, making legal interpretations more challenging. This lack of specificity has led to controversial prosecutions and ambiguous legal standards, sometimes resulting in overly harsh penalties for individuals engaging in activities that may not have been intended as malicious hacking.
CFAA and the Lack of a Specific Mental State Requirement
The Computer Fraud and Abuse Act (CFAA) (1986) is one of the most significant federal laws governing electronic data collection and cybersecurity. Originally enacted to combat hacking, the CFAA prohibits unauthorized access to computers and networks, but its broad and vague language has led to extensive litigation and debate over its interpretation.
Unlike many criminal statutes, the CFAA does not clearly define the required mental state (mens rea) necessary for a violation. This legal gap means that even unintentional access to a system could be considered a criminal offense if it is later deemed “unauthorized.” This has raised concerns that individuals who did not intend to engage in hacking—such as researchers, journalists, or employees accessing data in unclear circumstances—could still face prosecution under the law.
The ambiguity surrounding the CFAA has led to growing fears of overcriminalization, particularly in cases where individuals access publicly available or semi-public data in ways that companies or government agencies retroactively claim to be unauthorized. This issue has been at the center of landmark legal challenges, where courts have struggled to determine the line between legitimate data access and unlawful hacking. Critics argue that the CFAA, in its current form, is outdated and overly broad, leading to unfair prosecutions and chilling effects on cybersecurity research, digital journalism, and open-data initiatives.
When a federal statute does not explicitly specify a required mental state (mens rea), courts may interpret it as requiring a “knowingly” standard. This means that a defendant must have been aware of their actions and the nature of their conduct, even if they did not intend to violate the law.
In the context of the Computer Fraud and Abuse Act (CFAA) and similar statutes, courts have sometimes applied a “knowingly” or “intentionally” standard to determine whether an individual’s access to a system was unauthorized. However, the lack of a clear mens rea requirement in the CFAA has led to inconsistent interpretations across different cases and jurisdictions. This ambiguity has been a point of contention, as some prosecutions have targeted individuals who may not have realized their access was unauthorized or who acted under reasonable but mistaken assumptions about their permissions.
While courts can apply a default mental state in certain cases, the absence of an explicitly defined mens rea in federal law can still create legal uncertainty, particularly in statutes like the CFAA, where the scope of what constitutes unauthorized access is not clearly defined. This has contributed to concerns over broad enforcement, leading to debates over whether Congress should amend the law to clarify intent requirements and prevent overcriminalization.
Supreme Court Restrictions on CFAA Interpretation
Over time, the Supreme Court has limited the broad application of CFAA, particularly regarding what constitutes “unauthorized access.” Some key cases include:
Van Buren v. United States (2021)
- The Supreme Court ruled that CFAA does not apply to individuals who have legitimate access to a system but misuse that access.
- This case narrowed the interpretation of the CFAA by clarifying that “exceeding authorized access” means entering files, folders, or databases that are off-limits to the user, not merely using permitted access for an improper purpose.
- Relevance to Event Surveillance: If security firms scan for Bluetooth or MAC addresses using tools that access publicly available signals, CFAA liability may not apply unless the tools intrude into secured systems without authorization.
I created a separate post about Van Buren v. United States, where I suggested that Congress should define the required mental state for prosecution under the Computer Fraud and Abuse Act (CFAA) as “intentional.” However, Congress is unlikely to ever do so, because an explicit intent requirement would reduce prosecutors’ leverage in hacking cases. The CFAA is the primary statute under which most hackers are prosecuted. Given that hacking cases are inherently difficult to prosecute, adding “intent” as a separate element that prosecutors must prove would make securing convictions far more difficult.
Under federal law, different types of mental states exist, primarily based on mens rea (the guilty mind). The most commonly recognized mental states include:
- Intentional – The defendant acted with a conscious objective to bring about a particular result.
- Knowing – The defendant was aware that their conduct would likely produce a particular outcome.
- Reckless – The defendant consciously disregarded a substantial and unjustifiable risk.
- Negligent – The defendant failed to be aware of a substantial risk that a reasonable person would have recognized.
In a criminal case, prosecutors must prove all elements of the offense, including the requisite mental state. If a penal statute does not explicitly define a mental state, courts typically infer one based on legislative intent. The Supreme Court has often interpreted silent statutes by applying Staples v. United States and Morissette v. United States, which presume that criminal laws require at least a knowing or reckless mental state unless Congress clearly indicates otherwise.
United States v. Nosal (9th Circuit, 2012, 2016)
- The 9th Circuit ruled that violating terms of service does not automatically constitute a CFAA violation.
- The court emphasized that “unauthorized access” must involve technical circumvention, not just policy violations.
- Relevance to Event Surveillance: A security company collecting MAC addresses from open networks would likely not violate CFAA, but if it bypasses security controls (e.g., hacking encrypted data), CFAA could apply.
Impact of CFAA on Event Surveillance
If a security company only passively detects device signals, such as identifying MAC addresses, Bluetooth connections, or Wi-Fi probe requests without actively interfering with or accessing the data being transmitted, the Computer Fraud and Abuse Act (CFAA) may not apply. Passive data collection typically involves listening for signals that devices naturally emit in public spaces, which, depending on the circumstances, may not constitute unauthorized access under the CFAA.
However, if the company actively bypasses security protections, such as decrypting encrypted communications, intercepting private messages, or hacking into personal devices, CFAA liability increases significantly. Engaging in such activities could be considered unauthorized access, particularly if the company exploits system vulnerabilities, circumvents password protections, or collects data that users have taken steps to secure. In these cases, federal authorities and courts are more likely to view the actions as violations of CFAA’s anti-hacking provisions.
Because the CFAA does not clearly specify a required intent (mens rea) for violations, even automated data collection could be legally risky if courts interpret it as unauthorized access. For example, if a security system automatically scans and logs network data without explicit user consent, a court could determine that the system has accessed data in a way that violates the law, even if no human actively initiated the process. This legal uncertainty makes it crucial for security firms and data collectors to implement strict compliance measures and seek legal guidance before deploying any technology that involves electronic data collection.
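As one example of the kind of compliance measure mentioned above, a firm that passively logs device identifiers might pseudonymize them on the spot instead of storing raw MAC addresses. The following Python sketch is purely illustrative (and not legal advice); the daily key rotation and the truncation length are assumptions I chose, not legal requirements.

```python
import hmac
import hashlib
import secrets
from datetime import datetime, timedelta, timezone

# Rotate the secret key daily so pseudonyms cannot be linked across days
# (the interval is an illustrative assumption, not a legal requirement).
_KEY_ROTATION = timedelta(days=1)
_current_key = secrets.token_bytes(32)
_key_created = datetime.now(timezone.utc)

def pseudonymize_mac(mac_address: str) -> str:
    """Return a truncated keyed hash of a MAC address instead of the raw value.

    The raw address is never written to storage; only the pseudonym is kept,
    which supports counting unique devices without identifying them."""
    global _current_key, _key_created
    now = datetime.now(timezone.utc)
    if now - _key_created > _KEY_ROTATION:
        _current_key = secrets.token_bytes(32)
        _key_created = now
    digest = hmac.new(_current_key, mac_address.lower().encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncate to limit re-identification value

if __name__ == "__main__":
    print(pseudonymize_mac("AA:BB:CC:DD:EE:FF"))
```

The design choice here is that raw identifiers never reach storage, which lets the firm count unique devices for crowd metrics while limiting its ability (and anyone else’s) to tie a record back to a specific attendee.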
_____________________________________________________________________________________
We have now covered federal laws governing electronic data collection, so let’s shift our focus to state laws. While there is no single, comprehensive federal statute regulating data privacy and security across all industries, many states have enacted their own laws to fill in the gaps and provide additional protections for consumers.
Some states have particularly strong and relevant laws that impact data collection, privacy, and cybersecurity. These laws can vary significantly depending on the state, industry, and type of data being collected. For example, California has some of the most comprehensive consumer privacy laws in the country, including the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA), which give residents greater control over their personal data. Other states, such as Illinois, have passed specific laws like the Biometric Information Privacy Act (BIPA) to regulate the collection and storage of biometric data.
Because state laws often impose different requirements than federal laws, businesses and organizations that collect data must be aware of state-specific regulations that apply to their operations. In the next section, we will discuss key state laws, their enforcement mechanisms, and how they impact electronic data collection and privacy rights.
_____________________________________________________________________________________
2. State-Level Privacy Laws
The majority of states in the U.S. do not have specific laws regulating biometric data collection or cybersecurity practices. This gap exists because the law often lags behind technology, typically by about five years or more. As new technologies emerge, legislators struggle to keep pace, leading to a patchwork of state-level regulations rather than a unified national standard.
Given that the event in question may take place in California, it is essential to consider California’s strict privacy laws, particularly the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA). These laws impose significant obligations on businesses that collect personal data, including transparency requirements, consumer rights to opt out of data collection, and strict penalties for non-compliance. Because California has some of the strongest consumer privacy protections in the country, companies operating in the state—or handling data from California residents—must ensure full compliance to avoid legal risks.
Additionally, this discussion will include real-world examples of how some airports implement electronic surveillance, including the use of facial recognition technology, automated tracking systems, and biometric verification processes for passengers. The increasing reliance on biometric security measures raises critical privacy and legal concerns, especially in states that lack comprehensive regulations.
Since California, Texas, and Illinois have some of the most notable biometric and data privacy laws, we will explore their unique legal frameworks and enforcement mechanisms. While California leads the nation in consumer privacy rights, Texas has its own set of biometric data regulations, and Illinois’ Biometric Information Privacy Act (BIPA) remains the strictest biometric privacy law in the U.S., allowing individuals to sue companies for violations. Understanding these laws is crucial for businesses, event organizers, and security firms that handle electronic surveillance and personal data collection.
California Consumer Privacy Act (CCPA) & CPRA
The California Consumer Privacy Act (CCPA) and its expanded version, the California Privacy Rights Act (CPRA), establish some of the most comprehensive data privacy protections in the United States. These laws apply to any company that collects, processes, or sells personal data from California residents, regardless of where the company is based. This means that businesses outside of California may still be subject to these regulations if they interact with California consumers or track their data online.
Under these laws, California residents have the right to control how their personal data is collected, stored, and shared. One of the most important provisions is the right to opt out of data collection and the sale of their personal information. Businesses that collect consumer data must provide clear mechanisms for opting out, such as a “Do Not Sell My Personal Information” link on their websites. Additionally, consumers can request access to their data, ask for corrections, and demand that their personal information be deleted.
Businesses are also required to disclose exactly what data they collect, how it is used, and whether it is shared with third parties. This transparency requirement means that companies must provide detailed privacy policies outlining the types of information they collect—such as names, email addresses, IP addresses, geolocation data, and browsing history—and explain how that data is processed. If a company sells or shares consumer data, it must clearly state the categories of recipients and the purpose of the data transfer.
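In practice, the rights described above (know, delete, and opt out of sale) are handled through a consumer-request workflow. Below is a deliberately simplified, hypothetical Python sketch of such a handler; identity verification, the statutory response deadlines, and audit logging are omitted, and the data store and field names are assumptions.

```python
from enum import Enum

class RequestType(Enum):
    KNOW = "know"        # right to know what data is held
    DELETE = "delete"    # right to request deletion
    OPT_OUT = "opt_out"  # right to opt out of sale/sharing

# In-memory stand-in for a real consumer-data store (illustrative only).
_consumer_data = {}   # consumer_id -> dict of collected data
_do_not_sell = set()  # consumer_ids that opted out of sale/sharing

def handle_request(consumer_id: str, request: RequestType):
    """Process a verified consumer request. (Identity verification, response
    deadlines, and logging are omitted for brevity.)"""
    if request is RequestType.KNOW:
        return _consumer_data.get(consumer_id, {})
    if request is RequestType.DELETE:
        _consumer_data.pop(consumer_id, None)
        return {"status": "deleted"}
    if request is RequestType.OPT_OUT:
        _do_not_sell.add(consumer_id)
        return {"status": "opted_out_of_sale"}

if __name__ == "__main__":
    _consumer_data["attendee-123"] = {"email": "x@example.com", "facial_template_id": "t-9"}
    print(handle_request("attendee-123", RequestType.KNOW))
    print(handle_request("attendee-123", RequestType.OPT_OUT))
    print(handle_request("attendee-123", RequestType.DELETE))
```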
The CPRA, which took effect in 2023, strengthens the CCPA by introducing new consumer rights and stricter enforcement measures. It expands protections for sensitive personal information, such as biometric data, Social Security numbers, and financial details. The CPRA also establishes the California Privacy Protection Agency (CPPA), a regulatory body dedicated to enforcing data privacy laws and investigating violations.
Non-compliance with the CCPA and CPRA can lead to hefty fines and legal consequences. Businesses that fail to honor consumer rights, neglect to provide required disclosures, or experience preventable data breaches can face penalties ranging from $2,500 per violation (for unintentional violations) to $7,500 per violation (for intentional violations or those involving minors’ data). Consumers may also have the right to file lawsuits in cases where their personal data is exposed due to a company’s failure to implement reasonable security measures.
For companies that collect data from California residents, compliance with CCPA and CPRA is not optional—it is a legal requirement. Businesses should implement strong data protection policies, maintain clear privacy disclosures, and provide consumers with accessible tools to exercise their rights. Failure to do so could result in significant financial penalties and reputational damage.
California does not have a separate biometric law like Illinois, but biometric data falls under the CCPA and CPRA, which give consumers:
✔️ The right to opt out of biometric data collection.
✔️ The right to know what data is collected and how it’s used.
✔️ The right to request deletion of biometric data.
Example: Los Angeles International Airport (LAX, California)
- LAX uses facial recognition for international departures to speed up boarding.
- Under the CCPA, airlines and other private businesses must provide passengers with a notice explaining how their facial data will be used; federal agencies such as TSA and CBP fall outside the CCPA’s scope but maintain their own notice and opt-out procedures.
- Passengers have the right to refuse biometric scanning and opt for manual identity verification.
Legal Consequences:
- If a company fails to disclose biometric data collection, passengers can file complaints with the California Attorney General.
- Businesses that violate the law can face fines of up to $7,500 per violation.
Facial recognition technology is increasingly used in airports, shopping malls, and event venues for security and identity verification. However, the collection and storage of biometric data, such as facial scans, fingerprints, and iris scans, are subject to strict privacy laws in certain states, including Illinois, Texas, and California.
These laws require consent before collecting, storing, or sharing biometric data, and violations can result in lawsuits and regulatory penalties.
Illinois: The Biometric Information Privacy Act (BIPA)
Illinois’ Biometric Information Privacy Act (BIPA) is widely regarded as the strictest biometric privacy law in the United States. Enacted in 2008, BIPA imposes rigorous requirements on private entities that collect, store, or use biometric identifiers, including fingerprints, retina scans, voiceprints, and facial recognition data. The law was designed to protect individuals from unauthorized data collection, security breaches, and identity theft, particularly as biometric technology becomes more commonly used in workplaces, retail environments, and public spaces.
BIPA applies to any private company operating in Illinois that collects biometric data, regardless of where the company is headquartered. This means that businesses across the U.S. can be subject to BIPA’s requirements if they handle biometric data from Illinois residents. The law does not apply to government agencies, law enforcement, or financial institutions covered under other federal regulations.
Example: O’Hare International Airport (Chicago, IL)
- O’Hare International Airport has facial recognition kiosks for international travelers to verify their identity with U.S. Customs and Border Protection (CBP).
- Under BIPA, an airport security company or airline must obtain explicit written consent before collecting and storing biometric data.
- If an airline were to scan a passenger’s face without consent and store the data, it could face a BIPA lawsuit.
Legal Consequences:
- Companies must disclose how they collect, use, and store biometric data.
- Failure to comply can result in statutory damages of $1,000 per negligent violation (or $5,000 per intentional or reckless violation), which can aggregate quickly in class-action lawsuits.
- Example: In 2020, Clearview AI (a facial recognition company) was sued under BIPA for scraping biometric data without consent.
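Because BIPA damages are assessed per violation and can be aggregated across a class, exposure scales very quickly with attendance. A rough, purely illustrative calculation (the attendee count is an assumption):

```python
# Hypothetical event: 20,000 attendees scanned without the written consent BIPA requires.
attendees = 20_000
negligent = attendees * 1_000      # $1,000 per negligent violation
intentional = attendees * 5_000    # $5,000 per intentional/reckless violation

print(f"Negligent-violation exposure:   ${negligent:,}")    # $20,000,000
print(f"Intentional-violation exposure: ${intentional:,}")  # $100,000,000
```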
Texas: The Capture or Use of Biometric Identifier Act (CUBI Act)
Texas also regulates biometric data collection through the Capture or Use of Biometric Identifier Act (CUBI Act). While this law is not as strict as Illinois’ Biometric Information Privacy Act (BIPA), it still establishes important privacy protections for individuals whose biometric data is collected by businesses operating in Texas. The CUBI Act applies to private entities that collect, store, or use biometric identifiers, such as facial recognition data, fingerprints, retina scans, and voiceprints.
Under the CUBI Act, businesses must comply with several key requirements to ensure consumer privacy and data security:
First, businesses are required to obtain consent before collecting biometric information, such as facial recognition scans or fingerprints. This means that companies must inform individuals in advance that their biometric data is being collected, explain the purpose of collection, and obtain explicit consent before proceeding. Unlike BIPA, which requires written consent, the Texas law does not specify how consent must be obtained, making compliance slightly more flexible. However, failure to properly inform individuals can still result in legal liability.
Second, businesses must destroy biometric data within a reasonable time once it is no longer needed. The law requires that biometric identifiers be deleted within one year after the data is no longer necessary for the original purpose for which it was collected. This provision helps prevent indefinite storage of sensitive biometric information, reducing the risk of data breaches, unauthorized access, or misuse.
Third, the law prohibits businesses from selling, leasing, or disclosing biometric data to third parties without consent. This means companies cannot profit from biometric information by selling it to advertisers, data brokers, or other organizations without explicit authorization from the individual. However, the CUBI Act does allow sharing of biometric data under certain limited circumstances, such as when required by law enforcement or in cases where an individual voluntarily consents to a specific use.
While Texas’ CUBI Act lacks BIPA’s private right of action, meaning individuals cannot sue companies directly for violations, enforcement is handled by the Texas Attorney General. Businesses that fail to comply with the law can face civil penalties of up to $25,000 per violation.
Overall, while Texas’ biometric privacy law is less stringent than those in states like Illinois, it still places important obligations on businesses that collect and store biometric data. Companies operating in Texas should ensure that their biometric data policies align with state regulations, particularly regarding consent, data retention, and restrictions on sharing biometric information. As biometric technology continues to evolve, Texas may introduce further legal updates, making it essential for businesses to stay informed on privacy compliance requirements.
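To illustrate the one-year destruction requirement described above, here is a minimal Python sketch of how a firm might track when the purpose for each biometric record ends and flag anything held past the deadline. The record structure is an assumption for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION_LIMIT = timedelta(days=365)  # CUBI: destroy within one year of the purpose ending

@dataclass
class BiometricRecord:
    record_id: str
    purpose: str
    purpose_ended: Optional[datetime] = None  # set when the original purpose is fulfilled

def records_due_for_destruction(records, now=None):
    """Return records whose purpose ended more than a year ago and that
    should therefore already have been destroyed."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records
            if r.purpose_ended is not None and now - r.purpose_ended > RETENTION_LIMIT]

if __name__ == "__main__":
    old = BiometricRecord("r1", "boarding verification",
                          purpose_ended=datetime.now(timezone.utc) - timedelta(days=500))
    fresh = BiometricRecord("r2", "boarding verification",
                            purpose_ended=datetime.now(timezone.utc) - timedelta(days=30))
    for r in records_due_for_destruction([old, fresh]):
        print("Destroy:", r.record_id)
```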
Example: Dallas/Fort Worth International Airport (DFW, Texas)
- DFW Airport has biometric boarding gates that use facial recognition to verify passenger identities.
- Under the CUBI Act, airlines and other private entities must inform passengers that facial recognition is being used and allow them to opt out (as a state law, CUBI does not bind federal agencies such as TSA or CBP).
- Unlike BIPA, there is no private right of action—only the Texas Attorney General can sue for violations.
3. Cybersecurity Law is Highly Dynamic
One of the biggest challenges in this area is that cybersecurity and privacy laws are constantly evolving.
- Congress and state legislatures are actively proposing new regulations on biometric surveillance, data collection, and cybersecurity.
- Courts are regularly reevaluating the scope of laws like CFAA—as seen in Van Buren and Nosal, which significantly changed how unauthorized access is defined.
- International laws like the GDPR (General Data Protection Regulation) influence U.S. policies, and companies operating globally must comply with multiple overlapping legal frameworks.
- Emerging AI regulations may further impact how facial recognition can be used.
Why This Matters for Event Security
- What is legal today may not be legal tomorrow.
- Security companies must stay updated on privacy and cybersecurity laws to avoid liability.
- Relying solely on disclaimers is not enough; companies must adapt their data collection policies to align with evolving legal standards.
4. Best Practices for Legal Compliance
If a company wants to implement electronic surveillance while remaining legally compliant, it must take proactive steps to ensure that its practices align with federal, state, and industry-specific regulations. Data collection and surveillance can pose significant legal and ethical risks, especially if they involve tracking consumer behavior, collecting personal identifiers, or monitoring communications. To minimize liability and ensure compliance, companies should follow these key best practices:
First, the company should obtain explicit consent from individuals before collecting their data, ideally through ticket agreements, terms of service, or clear disclaimers. Consent should be informed and voluntary, meaning that individuals understand what data is being collected, how it will be used, and whether it will be shared with third parties. Many states, such as California under the CCPA, require businesses to disclose their data collection practices upfront, so failing to obtain consent could result in legal penalties.
Second, the company should provide opt-out mechanisms that allow individuals to refuse data collection if they choose. This could include privacy settings in mobile applications, website cookie preferences, or physical opt-out options at events. Giving users control over their data is not only a legal safeguard but also enhances consumer trust and reduces the risk of regulatory scrutiny.
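As a concrete illustration of the consent and opt-out steps above, a ticketing flow might record exactly which disclosures each attendee saw, what they consented to, and what they later opted out of, and then gate any processing on that record. The Python sketch below is hypothetical; the field names and data categories are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    attendee_id: str
    disclosures_shown: list                           # notices the attendee actually saw
    consented_to: set = field(default_factory=set)    # e.g., {"facial_recognition"}
    opted_out_of: set = field(default_factory=set)    # e.g., {"bluetooth_tracking"}
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Only process a data category if the attendee saw the disclosure,
    affirmatively consented, and has not since opted out."""
    return (purpose in record.disclosures_shown
            and purpose in record.consented_to
            and purpose not in record.opted_out_of)

if __name__ == "__main__":
    rec = ConsentRecord("attendee-42",
                        disclosures_shown=["facial_recognition", "bluetooth_tracking"],
                        consented_to={"facial_recognition"})
    rec.opted_out_of.add("bluetooth_tracking")
    print(may_process(rec, "facial_recognition"))   # True
    print(may_process(rec, "bluetooth_tracking"))   # False
```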
Third, limiting data retention is critical to protecting privacy and reducing liability. The company should only store personal data for as long as necessary to fulfill its stated purpose. Retaining data indefinitely or failing to delete it after its intended use can expose the company to data breaches, lawsuits, and regulatory investigations. A strong data retention policy should specify how long different types of data are stored, when they are deleted, and how they are securely disposed of.
Additionally, companies must regularly review evolving privacy and cybersecurity regulations to stay compliant with new legal requirements. Privacy laws are constantly changing, with states passing new consumer protection statutes and regulatory agencies updating enforcement guidelines. Keeping up to date with these changes ensures that the company does not inadvertently violate new laws.
Finally, consulting with legal counsel is essential for ensuring full compliance with data privacy laws. A qualified attorney specializing in privacy, cybersecurity, or consumer protection can help draft clear policies, review compliance risks, and provide legal guidance on surveillance practices. Given the complexity of data privacy laws, seeking legal advice helps businesses navigate potential pitfalls and avoid expensive penalties or litigation.
By implementing these best practices, a company can use electronic surveillance legally, protect consumer rights, and reduce the risk of regulatory enforcement or lawsuits.
Corporations Break the Law Frequently
Corporations often engage in illegal practices, particularly when it comes to electronic surveillance. While laws exist to protect consumer privacy, many companies knowingly violate these regulations because the consequences—typically small fines—are minimal compared to the profits they generate from data collection.
How Companies Conduct Electronic Surveillance
There are two primary ways corporations engage in electronic surveillance:
Without Consent – Some corporations collect data without notifying or obtaining permission from consumers. This method carries significant legal risks, as it directly violates privacy laws. However, enforcement is often weak, and penalties are not severe enough to serve as an effective deterrent.
With Consent – Many companies attempt to shield themselves from liability by obtaining user consent through lengthy, complex agreements. Consumers “agree” to electronic surveillance when they sign contracts, connect devices to the internet, or use certain services. However, even with consent, corporations are still at risk of lawsuits. Privacy laws are complex and evolving, and courts have ruled against companies like car manufacturers and airports despite their efforts to secure legal protection.
While corporations continue to exploit legal loopholes, growing public awareness and regulatory action could lead to stricter enforcement and greater accountability.
Conclusion
Security technology is advancing rapidly, and event organizers must balance security with privacy rights.
If a security company wants to collect Bluetooth, MAC addresses, and facial recognition data, compliance with federal and state laws is critical. With cybersecurity laws constantly changing, businesses must stay informed to avoid liability and legal challenges.