Digital Age Assurance Act (DAAA): Frequently Asked Questions

Last Updated: October 25, 2024

Table of Contents:

  1. What is device-based age verification?
  2. Who is required to comply under the Digital Age Assurance Act?
  3. What are the proposed enforcement mechanisms?
  4. Why is U.S.C. 2256(2)(A) referenced?
  5. Is device-based age verification considered Constitutional?
  6. There are age verification bills already, why aren't we just relying on those?
  7. How does device-based age verification impact current social media bills?
  8. What types of devices does the Digital Age Assurance Act apply to, and how would this work for laptops?
  9. Does the Digital Age Assurance Act offer protections on the app store level?
  10. Why is a "commercially reasonable" method of age verification so broad?
  11. What would be the fiscal impact of a proposed bill?
  12. What if an individual does not have the proper documentation to verify their age?
  13. What happens if a user does not want to verify their age upon device activation?
  14. If the device is indicated as a child device, how is an application store expected to receive verifiable parental consent?
  1. What is device-based age verification?
    1. Device-based age verification is an age verification method in which a user verifies their age once through their device's operating system. The user's age is then securely stored on the user's device within the operating system.
      • A user would verify their age at the time of device activation, or through Operating System (OS) updates for devices sold prior to the effective date.
      • When a user attempts to access a website, application, application store, or online service that requires age verification, the user's operating system would share the user's verification status with the applicable website, application, application store, or online service through a secure application programming interface (API). The API would then provide the website, application, application store, or online service with a response on whether the user meets the defined age thresholds to access the platform.
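The signal-and-response flow described above can be sketched in miniature. This is an illustrative sketch only: the Act does not prescribe an API shape, and every name here (AgeSignal, os_age_signal, meets_threshold, and the bracket labels) is a hypothetical stand-in.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgeSignal:
    """Minimal payload an OS might return: a bracket, never a birthdate."""
    bracket: str  # e.g. "under_13", "13_to_15", "16_to_17", "18_plus"

def os_age_signal() -> AgeSignal:
    """Stand-in for the operating system's secure age-verification API."""
    return AgeSignal(bracket="18_plus")  # stored once at device activation

def meets_threshold(signal: AgeSignal, required: int) -> bool:
    """Translate a bracket into a yes/no answer for the requesting site.

    The site never sees an exact age or any identity data, only whether
    the device's user clears the threshold it asked about.
    """
    minimum_age = {"under_13": 0, "13_to_15": 13, "16_to_17": 16, "18_plus": 18}
    return minimum_age[signal.bracket] >= required

# A website requiring users to be 18+ would ask the API a single question:
allowed = meets_threshold(os_age_signal(), required=18)
print(allowed)  # True for this example signal
```

Note that the requesting site only ever learns a yes/no answer for the threshold it asked about, which is the privacy property the FAQ emphasizes.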
  2. Who is required to comply under the Digital Age Assurance Act?
    1. Under the proposed Digital Age Assurance Act, Covered Manufacturers, websites, applications, and online services are required to comply.
      • Covered Manufacturers are defined as the manufacturer of a device, an operating system for a device, or an application store, and are required to take commercially reasonable and technically feasible steps to determine or estimate the age of the device's primary user. Covered Manufacturers are also required to provide websites, applications, application stores, and online services with a digital signal regarding the device user's age threshold via a real-time application programming interface ("API").
        • If the Covered Manufacturer is an application store, the application store must obtain parental or guardian consent prior to permitting an individual under sixteen years of age to download an application from the application store.
        • The application store must also provide the parent or guardian with the option to connect with the developer of an application for the purpose of facilitating parental supervision tools.
      • Websites, applications, or online services that make available mature content are required to recognize and allow for the receipt of digital age signals.
        • Websites, applications, or online services that make available a substantial portion of mature content are required to block access to individuals indicated as under eighteen years of age.
        • Websites, applications, or online services that knowingly make available less than a substantial portion of mature content are required to block access to known mature content to individuals indicated as under eighteen years of age.
      • Application stores are required to recognize and allow for the receipt of digital age signals to determine whether an individual is under sixteen years of age, and obtain parental or guardian consent as described in the above Covered Manufacturer requirements.
  3. What are the proposed enforcement mechanisms?
    1. It's important to understand that the proposed Digital Age Assurance Act language may differ state-by-state, and as such, enforcement actions should be determined within each jurisdiction. The enforcement mechanisms outlined within the Digital Age Assurance Act include the following:
      • If the state believes an entity is in violation of the age verification requirements, including Covered Manufacturers, websites, applications, and online services, the state may provide an entity with written notice of specific violations. If the entity does not respond to the state or continues to violate the act in breach of an express written statement, the state may bring an action and seek damages against the entity.
        • Within the action, the state may collect a civil penalty of up to $10,000 per violation.
        • The state may alternatively seek damages for up to $2,500 per each minor actually harmed in violation of the act.
      • Covered Manufacturers that have taken commercially reasonable and technically feasible steps to determine or estimate the age of the device's user are not subject to liability.
  4. Why is U.S.C. 2256(2)(A) referenced?
    Within the Digital Age Assurance Act, "mature content" is defined by reference to U.S.C. 2256(2)(A), the federal code definition of "sexually explicit conduct," which provides:
    1. Except as provided in subparagraph (B), "sexually explicit conduct" means actual or simulated -
      • Sexual intercourse, including genital-genital, oral-genital, anal-genital, or oral-anal, whether between persons of the same or opposite sex;
      • Bestiality;
      • Masturbation;
      • Sadistic or masochistic abuse; or
      • Lascivious exhibition of the anus, genitals, or pubic area of any person.
    2. This code was specifically included within this legislation to provide a standard definition that can be consistently referenced across states. Additionally, U.S.C. 2256(2)(A) is inclusive and neutral with respect to an individual's or group's sexual orientation or identity.
  5. Is device-based age verification considered Constitutional?
    1. Unlike alternative age verification methods, device-based age verification is considered Constitutional because it neither burdens users nor requires the disclosure of identity or other personal information in order to access mature content, and no Constitutionally protected speech is impeded.
    2. For a full analysis on the Constitutionality of device-based age verification, please refer to the Digital Age Assurance Act Constitutional Analysis.
  6. There are age verification bills already, why aren't we just relying on those?
    1. In the United States, 19 bills requiring age verification for mature content have been passed. States with laws currently in effect include Alabama, Arkansas, Idaho, Indiana, Kansas, Kentucky, Louisiana, Mississippi, Montana, Nebraska, North Carolina, Texas, Utah, and Virginia.
    2. The existing age verification laws pose significant concerns to user data privacy and security, present Constitutional challenges to First Amendment free speech rights, dissuade proper platform compliance and enforcement, and can lead to negative outcomes from unintended user behavior.
    3. Age verification solutions must be properly implemented to achieve the primary goal of protecting children while simultaneously preserving the privacy and Constitutional rights of adult consumers.
    4. As such, there is a significant need for legislation that places the age assurance mandate at the source, on the device, to resolve these challenges. Device-based technology as a method of age verification is technically feasible to implement and relies on elements that are already common practice across industries. Device-based age assurance verifies a user's age through their device's operating system and shares the user's verified age or age range with the application, service, or website the user is attempting to reach.
    5. By standardizing age verification at the device level, users would no longer be required to provide personal information or PII numerous times across multiple platforms, significantly reducing the risk of misuse, data breaches, and overall concerns for data privacy. In turn, device-based age verification encourages compliance across websites, applications, and online services, and it significantly reduces opportunities for bypass, such as a minor circumventing the anticipated protections with a basic VPN service or by accessing non-compliant sites. This creates a safer, privacy-preserving approach to age verification that achieves its central goal of protecting children, while also addressing the various Constitutional and compliance challenges posed by existing legislation.
  7. How does device-based age verification impact current social media bills?
    1. Existing social media bills are separate from existing age verification bills for adult content; however, both contain a requirement for entities to verify the age of the user attempting to access their website or service.
    2. The verification mechanism proposed by device-based age verification can be leveraged by both social media companies and adult content websites in order to fulfill applicable age verification requirements. Rather than requiring a user to verify their age on a per-platform basis, device-based age verification centralizes the user's point of verification. This allows users to reduce the amount of personal information provided across platforms, including both social media and adult content websites.
  8. What types of devices does the Digital Age Assurance Act apply to, and how would this work for laptops?
    1. The Digital Age Assurance Act would apply to all devices that are designed for and capable of communicating across a computer network for the purpose of transmitting, receiving, or storing data, including, but not limited to, a desktop, laptop, cellular telephone, tablet, or other device designed for such purpose.
    2. Desktop computers and laptops run on an operating system, such as Google (Android), Apple (macOS/iOS), or Microsoft (Windows), and device-based age verification can be implemented through the operating system during profile-level setup. This allows device-based age verification to work with the integrated security processes already in place for accessing desktop and laptop computers, including shared devices with multiple accounts or profiles.
  9. Does the Digital Age Assurance Act offer protections on the app store level?
    1. The Digital Age Assurance Act requires application stores to receive the digital age signal regarding whether an individual is under the age of thirteen, at least thirteen years of age and under sixteen years of age, at least sixteen years of age and under eighteen years of age, or at least eighteen years of age. The application store would receive the digital age signal from the device's operating system via the same real-time API integration required for websites, applications, or online services.
    2. If the device's user is under sixteen years of age, application stores would be required to obtain parental or guardian consent prior to permitting an individual to download an application from the application store. Additionally, application stores would be required to provide the parent or guardian with the option to connect to the developer of the downloaded application for the purpose of facilitating parental supervision tools.
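As a rough illustration of the bracketing and consent gate described above, the following sketch maps a verified age onto the four thresholds named in the Act and gates downloads for under-16 brackets. The function names and bracket labels are hypothetical; the Act does not specify an implementation.

```python
def age_bracket(age: int) -> str:
    """Map a verified age onto the four brackets named in the Act:
    under 13, at least 13 and under 16, at least 16 and under 18, or 18+."""
    if age < 13:
        return "under_13"
    if age < 16:
        return "13_to_15"
    if age < 18:
        return "16_to_17"
    return "18_plus"

def may_download(bracket: str, has_parental_consent: bool) -> bool:
    """Under-16 brackets require parental or guardian consent first."""
    if bracket in ("under_13", "13_to_15"):
        return has_parental_consent
    return True

print(age_bracket(15))                  # 13_to_15
print(may_download("13_to_15", False))  # False: consent required
print(may_download("16_to_17", False))  # True: no consent gate
```

The exact age never leaves the device in this model; the application store only consumes the bracket and a consent flag.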
  10. Why is a "commercially reasonable" method of age verification so broad?
    1. Covered manufacturers are required to take commercially reasonable and technically feasible steps to determine or estimate the age of the device's user upon device activation.
    2. Given the ever-evolving state of technology, providing the option for a "commercially reasonable method" of age verification allows Covered Manufacturers to deploy verification methods in line with their existing processes and technology. Many major device manufacturers, such as Google and Apple, have already integrated age verification processes in their standard device setup practices. Rather than outlining stringent, narrowly defined requirements for age verification, "commercially reasonable" methods allow the trusted handful of Covered Manufacturers to implement processes conducive to their existing system, and therefore allows for the seamless implementation of device-based age verification.
  11. What would be the fiscal impact of a proposed bill?
    1. The Digital Age Assurance Act would allow the enforcing body to seek civil damages and issue fines for Covered Manufacturers or websites, applications, application stores, or online services found in violation. These fines may total up to $10,000 per violation of this act, or up to $2,500 per each minor actually harmed in violation of the act.
    2. Enforcement costs are anticipated to be very low, as investigations could be conducted through API implementation checks, scanning websites, applications, application stores, or online services at scale for the appropriate API integration. Penalties collected from violations identified during these investigations would largely fund any associated costs of enforcement. Note that the fiscal impact of a proposed device-based age assurance bill may vary across states, depending on how the enforcing agency pursues violations.
  12. What if an individual does not have the proper documentation to verify their age?
    1. As with all legislation, accommodations are made for exceptions (limitations for the undocumented also exist with platform-level age verification). The commercially reasonable method of age verification should allow Covered Manufacturers to determine different methods of verification, from government-issued identification cards to age estimation technology. At present, these are potential options for age verification:
      • Documentation: If the Covered Manufacturer allows for government-issued documentation, it can be any government document, including a foreign ID.
      • Technology: If the individual attempting to verify does not have any government-issued identification, a Covered Manufacturer could employ existing technologies, such as age estimation technology.
    2. It's important to note that age verification is not identity verification, and with existing technology capabilities it is possible for Covered Manufacturers to tie an age to an individual's account without knowing the identity of an individual.
  13. What happens if a user does not want to verify their age upon device activation?
    1. The Digital Age Assurance Act would require all users to verify their age upon device activation, or during OS updates for devices sold prior to the effective date. If a user's age is verified as under eighteen or another applicable age threshold, the user would not be permitted to access websites, applications, or online services that require age verification.
  14. If the device is indicated as a child device, how is an application store expected to receive verifiable parental consent?
    1. Application stores would be required to implement a reasonably designed, technically feasible method for receiving verifiable parental consent, similar to how manufacturers of devices or operating systems are required to take commercially reasonable and technically feasible steps to verify a user's age.
    2. The Digital Age Assurance Act does not prescribe a specific method for receiving parental consent so that application stores may design and deploy a mechanism that is best suited to their existing technology. Note that Covered Manufacturers or websites, applications, or online services will still be required to comply with separate, federal regulations regarding parental consent not associated with the Digital Age Assurance Act.
    3. Several existing U.S. federal privacy laws suggest acceptable methods to obtain verifiable parental consent, including:
      • Having the parent provide a copy of a government-issued ID that can be checked against a database.
      • Having the parent use a credit or debit card to make a transaction, which notifies the account holder.
      • Having the parent answer a series of knowledge-based questions.
      • Having the parent call a toll-free number staffed by trained personnel.
DAAA Constitutional Analysis

2024
THE DIGITAL AGE ASSURANCE ACT PASSES CONSTITUTIONAL MUSTER

The Digital Age Assurance Act of 2024 (the “Digital Age Assurance Act”) rests on firm constitutional grounds. It would require adult-content websites and related providers to utilize existing technology to curb minors’ access to sexually explicit, adults-only online content. Because age verification can occur automatically whenever a device attempts to gain access to an adult-only site, and neither burdens users nor requires disclosure of identity or any other personal information, it avoids the constitutional concerns that have been raised against prior age verification schemes or other measures to curb minors’ access to online adult content. Users do not have to reveal any personal information or take any affirmative action. Operating systems already have the Application Programming Interfaces (“APIs”) and software capable of providing owners’ age-status whenever a website asks for age verification. The bill would not materially burden users, content providers, or even operating system manufacturers.

From a constitutional perspective, the Digital Age Assurance Act falls well within established constitutional limits. No constitutionally protected speech is impeded, and none is chilled. The Digital Age Assurance Act even addresses the concerns raised in a pending Supreme Court case (Free Speech Coalition v. Paxton) addressing much more intrusive age-verification laws in 19 states. If enacted, it would pass constitutional muster under existing Supreme Court holdings (discussed below), such as United States v. Playboy Ent’t Grp., Inc. This is a completely safe, legal, and appropriate way to prevent minors’ access to online adult content.

BACKGROUND

The Digital Age Assurance Act is quite simple.
Device-based age assurance verifies a user’s age through their device’s operating system and shares the user’s verified age or age range with the application, service, or website the user is attempting to visit. Existing features on all operating systems, such as Apple’s Wallet®, can provide the device-owner’s age without disclosing any personal information. Privacy is fully protected: the only information relayed is age-range, which is not constitutionally protected when adult-only content is the subject.

Only three provisions of the Digital Age Assurance Act are constitutionally significant here: it requires (1) manufacturers to take commercially reasonable and technically feasible steps to establish the age of the device’s owner or user; (2) manufacturers to provide websites, applications, and online services with a signal indicating whether the individual is under age 13, at least 13 and under 16, at least 16 and under 18, or 18 or older; and (3) websites, applications, and online services with statutorily defined “mature content” to recognize these age signals and to block access if the individual is underaged. None of these requirements raises a constitutional red flag.

ANALYSIS

1. The Digital Age Assurance Act Meets All Constitutional Tests for Age Verification Mandates.

States have broad authority to enact laws restricting minors’ access to material that is “obscene as to youths,” Erznoznik v. City of Jacksonville, 422 U.S. 205, 213 (1975), recognizing that what is acceptable for adults may be inappropriate for children. See Ginsberg v. New York, 390 U.S. 629, 634-35 (1968) (holding that a state law requiring distributors to verify customers’ age when selling pornographic material did not violate the First Amendment so long as a rational basis existed for finding that the material was harmful to children); Reno v. ACLU, 521 U.S. 844, 875 (1997) (government’s valid “interest in protecting children from harmful materials … does not justify an unnecessarily broad suppression of speech addressed to adults”). Minors have no First Amendment right of access to the “mature” content subject to the Digital Age Assurance Act. Indeed, the Digital Age Assurance Act presumes that the content providers are required by various federal and state laws to prohibit minors’ access to mature, adults-only content. The rights at issue belong to device owners, manufacturers, websites, applications, online services, and content providers, and none is infringed by the Digital Age Assurance Act.

  • No interference with adults’ access to mature content. Adults would maintain the exact same access to adult content that they presently have. Contrast with United States v. Playboy Ent’t Grp., Inc., 529 U.S. 803, 813 (2000) (finding that federal law restricting hours when sexually explicit television programming could be shown infringes adult viewers’ First Amendment rights of access to non-obscene communication). 
  • No chilling effect because no burden on users. The principal objection to age verification statutes is the burden placed on adults when accessing mature-content sites or services: a requirement to provide government-issued or comparable identification, which, due to privacy concerns, chills their access and implicates First Amendment concerns. Device-based age verification imposes no such burden and therefore has no chilling effect: owners and users are not required to take any action, and the only information revealed is the user/owner’s age status. 
  • No invasion of user privacy. Because age is the only information transmitted by the signal, the user/owner does not reveal any personal information. There is no risk of disclosure from hacks or other intrusions. 
  • No undue burden on manufacturers. The technology to implement the Digital Age Assurance Act already exists. Requiring an operating system sold or used in a state to transmit the owner’s age-range based upon the device’s registration data is a minimal burden that neither interferes with interstate commerce (i.e., it does not violate the dormant commerce clause despite potentially broad extraterritorial effect outside of the enacting state), (see Nat’l Pork Producers Council v. Ross, 598 U.S. 356 (2023) (upholding California statute requiring all pork sold in state to meet animal-welfare requirements)), nor violates due process, (see Exxon Corp. v. Maryland, 437 U.S. 117,  125 (1978) (holding that state law regulating in-state economic conduct of out-of-state  companies does not violate substantive due process when it is rationally related to  legitimate state purpose)).  
  • No burden or chilling effect on content providers. Because the Digital Age Assurance Act merely requires content providers to accept and utilize the age verification data that would be provided automatically when a device seeks access to a site with mature content, it does not burden or deter their First Amendment rights to free speech and expression. Requiring an adult site to recognize a signal indicating that the user is a minor is no different than requiring a vendor in the non-digital world to ascertain a customer’s adult status prior to selling adult-only products. 
  • No less restrictive alternatives. Even if the Digital Age Assurance Act burdens adults’ right to access constitutionally protected content—and it does not—it still would pass any possible requirement of “strict scrutiny” under the First Amendment, which requires the state to show that the statutory scheme is narrowly tailored such that no less restrictive alternative is available. See Playboy, 529 U.S. at 813. Here, no less intrusive means of age verification is even possible. All other means of restricting minors’ access (such as screening software or requiring actual proof of age status) have much greater effect and burden on adult owners/users. Indeed, by avoiding these more intrusive alternatives, the Digital Age Assurance Act enhances adults’ access. 
The Digital Age Assurance Act thus readily satisfies all applicable constitutional tests.

2. The Digital Age Assurance Act Avoids the Constitutional Issues Raised in Paxton.

The constitutionally benign character of the Digital Age Assurance Act is clear in contrast to the age verification statutes at issue in Paxton and lower-court decisions addressing their potential unconstitutionality. Those laws require online adult-content users to identify themselves by government-issued identification, facial recognition technology, or other commercially reasonable means to establish their age before accessing the content. In Free Speech Coal., Inc. v. Paxton, 95 F.4th 263 (5th Cir. 2024), a divided Fifth Circuit panel declined to preliminarily enjoin a Texas age verification statute, ruling that it was no more intrusive than the verification requirement applied to direct physical access in Ginsberg and thus passed constitutional muster under Ginsberg’s minimum-scrutiny test. Id. at 269-71. The U.S. Supreme Court has granted a writ of certiorari to review the Fifth Circuit’s decision.

None of the concerns raised by the Paxton dissent, the Supreme Court petitioners, or critics of the Fifth Circuit decision applies to the requirements set forth in the Digital Age Assurance Act. Indeed, device-based age verification is exactly the type of benign measure that First Amendment advocates have been insisting that states pursue in lieu of more intrusive or burdensome means.

First and foremost, no speech for adults is regulated by the Digital Age Assurance Act. By contrast, the Texas age-verification statute in Paxton “necessarily encompasses non-obscene, sexually expressive—and constitutionally protected—speech for adults.” Paxton, 95 F.4th at __ (Higginbotham, J., dissenting). Because no adult speech is regulated, the statute is assessed under minimal scrutiny (rational basis), and, since Ginsberg, age verification to restrict minors’ access to sexually explicit material is a universally accepted legislative objective.

Second, as shown above, the Digital Age Assurance Act passes strict scrutiny. Its age verification mechanism is the least restrictive alternative because it is narrowly tailored to protect user anonymity without impeding adult users’ access. Indeed, age verification has long been cited as the line separating permissible age verification in the physical world (adult purchasers are visually age verified in most cases) from the means attempted in prior online access cases. Thus, the principal issue under review by the Supreme Court—whether strict scrutiny or minimum scrutiny applies to the Texas statute—is immaterial here.

Third, because anonymity and privacy are completely protected, the Digital Age Assurance Act poses no chilling effect on access to protected material. Contrast with Paxton, 95 F.4th at 303 (Higginbotham, J., dissenting) (“the age verification mandate will chill protected speech”).

Finally, the lack of any tangible burden on manufacturers, websites, content providers, or users stands in stark contrast with the Texas statute under review in Paxton. There, users must authenticate their age before accessing the materials; providers must warn of risks from viewing the material; and users must view the warnings before gaining access. The Digital Age Assurance Act’s age verification process seamlessly avoids any such barriers between user and content provider.

In sum, the Digital Age Assurance Act deftly avoids all of the constitutional concerns raised in Paxton.

DAAA Technical Whitepaper

2024

Device-Based Age Assurance: A Safer Approach to Ensuring Access to Age-Appropriate Content

Last Updated Date: July 31, 2024

Executive Summary

Age verification has become a priority for lawmakers in their efforts to make the internet a safer space. Unfortunately, current efforts to enact age verification laws to prevent minors from accessing adult content pose significant data privacy and security concerns, present Constitutional challenges to First Amendment free speech rights, increase barriers to proper platform compliance and enforcement by dissuading user retention, and can lead to negative outcomes from unintended user behavior. Age verification solutions must be properly implemented to achieve the primary goal of protecting children, while simultaneously preserving the privacy and Constitutional rights of adult consumers.

As such, there is a significant need for legislation that places the age assurance mandate at the source, on the device, to resolve these challenges. Device-based technology as a method of age assurance is technically feasible to implement and relies on elements that are already common practice across industries. Device-based age assurance verifies a user’s age through their device’s operating system and shares the user’s verified age or age range with the application, service, or website the user is attempting to reach, creating a safer, privacy-preserving approach to age verification, while also addressing the various Constitutional and compliance challenges posed by existing iterations of current legislation.

Implications of Current Online Age Verification Laws

The current age verification requirements under recent legislation are ineffective and pose significant implications for the privacy and Constitutional rights of adults.
In June 2022, Louisiana passed an age verification law requiring platforms and websites that contain “a substantial amount of adult material” to implement an age verification method prior to granting users access to the website’s content. Since then, eighteen (18) additional states have followed suit. Requirements vary largely across states for what constitutes a reasonable age verification method, ranging from highly invasive methods, such as uploading a government-issued identification (ID) card, to vague methods that need only be “commercially reasonable.”

While these age verification laws may be well-intentioned in protecting minors from accessing age-inappropriate content, they fail to do so. Additionally, they require adult users to upload personal information and sensitive data prior to accessing content, posing adverse consequences to user privacy and constitutionally protected speech. Because current legislation requires verification to occur on a per-platform basis, adult users must upload or provide personal information numerous times across multiple platforms, significantly increasing the risk of misuse and phishing, of their information being compromised in data breaches, and of widespread identity theft. Additionally, existing age verification mandates burden adult users’ access to Constitutionally protected speech. Existing age verification laws, including Texas’ H.B. 1181, are actively undergoing challenges in district and appellate courts as well as the Supreme Court for harming the speech rights of adults by creating a government-mandated, restrictive barrier to access.

Though some companies with a genuine interest in protecting children and the privacy of adult users may comply with these regulations and take users’ safety and privacy into account, many companies and sites may not have the resources or desire to comply in a comprehensive manner.
This results in a patchwork approach to compliance with age verification laws: each of the potentially hundreds of thousands of platforms may have its own systems or third-party vendors, with a high degree of variance in how securely they store information, how much due diligence they perform on third-party vendors, and how strictly they or their third parties uphold data deletion policies.

Additionally, existing age verification laws dissuade compliance. Compliant sites that implement proper verification protocols have experienced a significant exodus of users, since adult users who do not want to share personal information seek out non-compliant sites, many of which are located outside the jurisdiction of the states. This naturally redistributes users to non-compliant platforms and websites. Many smaller platforms and websites that are not compliant continue operating without effective processes to verify the age of users, or without proper safeguards in place to protect the personal information collected from users. In the end, the goal of protecting minors online falls woefully short.

What is Device-Based Age Assurance

The most effective, secure, and equitable solution for protecting all users, minors and adults alike, is to implement a mechanism that verifies a user's age only once and at the point of access to the internet: on the device. The user's age or age range can then be shared with the application, online service, or website they are attempting to reach. This approach, otherwise known as device-based age assurance, would require a user's age to be independently verified one time by the device's operating system, which would securely store the user's age locally on the individual device.
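The verify-once, store-locally idea described above can be sketched as follows. This is a minimal illustration only, not any operating system's actual implementation; the class and method names are hypothetical, and the verification step itself is deliberately left abstract.

```python
from datetime import date

# Minimal sketch: the OS verifies a birthdate once at activation, keeps it
# locally, and afterwards answers only yes/no threshold queries, so the
# birthdate itself never leaves the device. All names are hypothetical.

class DeviceAgeStore:
    def __init__(self) -> None:
        self._birthdate: date | None = None  # held only inside the OS

    def verify_once(self, birthdate: date) -> None:
        """One-time verification at device activation (method unspecified)."""
        self._birthdate = birthdate

    def meets_threshold(self, minimum_age: int, today: date) -> bool:
        """Answer an age query without exposing the stored birthdate."""
        if self._birthdate is None:
            raise RuntimeError("age has not been verified on this device")
        b = self._birthdate
        # Subtract one if this year's birthday has not yet occurred.
        age = today.year - b.year - ((today.month, today.day) < (b.month, b.day))
        return age >= minimum_age

store = DeviceAgeStore()
store.verify_once(date(1990, 5, 20))                    # happens once, at activation
over_18 = store.meets_threshold(18, date(2024, 7, 31))  # True; birthdate never shared
```

A real implementation would of course sit behind hardware-backed secure storage, similar to how government-issued IDs are stored on devices today.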
When a user attempts to access a website containing adult content, the user's operating system would share the user's verification status with the applicable website through a secure Application Programming Interface (API), which would provide the website with a response on whether the user meets the defined age thresholds to access the platform. This approach ensures a seamless experience between the user and the platform the user is trying to access, eliminating the need to upload personal information to a third-party verification system or to each adult content platform visited, and removing barriers to accessing Constitutionally protected speech.

Device-based age assurance is straightforward and effective. The technology already exists, and it standardizes the age verification process, reducing potential points of failure and addressing the privacy, Constitutional, and compliance concerns raised by existing age verification laws. The crux of the approach requires collaboration with operating system companies, such as Apple (iOS), Google (Android), and Microsoft (Windows), to leverage existing infrastructure and technology to deploy a secure method of validating and storing a user's age, and to create a secure API through which a user's age or age range can be shared with the adult website in an anonymized and secure manner.

Technical Feasibility of Device-Based Age Assurance

Current hardware and software systems are already beyond the maturity level required to deploy a device-based age assurance solution. Apple, Inc., one of the leading operating system and technology companies, can be examined as a case study to demonstrate the existing technological feasibility of device-based age assurance. More recently, Google has deployed age assurance functionality in the United Kingdom.

Secure data stored by device manufacturers and operating systems can be accessed through readily available, trusted, and well-developed APIs.
An API is a set of protocols that allow software  programs to communicate and access specific data points from other operating systems,  applications, or services. This API integration provides websites with the functionality to request  information, including age information, directly from the device’s operating system without  requiring the website to authenticate the personal information of the user. This allows platforms  and websites to request and access data stored within the device without needing to directly  interact with the backend architecture of the device’s operating system.  As an example, Apple already maintains a Wallet API that is capable of the functionalities  required for device-based age assurance. The Verify by Wallet is an example of an API that  allows integrations that share verified, authenticated age information to approved third party  applications. The data shared is limited only to the integration’s use case, ensuring the privacy  of the device user. This prevents device manufacturers from oversharing user data beyond the  approved use case and allows websites to minimize the amount of data they collect. Though the method in which device manufacturers and operating systems may perform and collect age verification information can vary, the Verify by Wallet API demonstrates an example of the  existing technology entities have in place to share limited personal data with third parties.  A Potential Implementation of Device-Based Age Assurance  By leveraging the above technology to securely store and share verified age information through  APIs, it is possible to implement a highly effective device-based age assurance mechanism that  addresses the unintended privacy risks of current, mandated platform-level age verification  requirements. The user, device/operating system, and platforms/websites are all able to safely  interact, verify ages using privacy-preserving approaches, and protect minors from accessing  age-inappropriate content. 
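The use-case-limited exchange described above can be sketched in miniature from the operating system's side. This is a hypothetical illustration only: the integration registry, site identifier, query format, and response fields are all assumptions for the sketch, not any vendor's actual API.

```python
# Hypothetical OS-side handler for an age-signal API. Integrations reviewed
# and approved by the OS are keyed by site, each limited to a declared use
# case (here, a single allowed age query). All names are illustrative.

APPROVED_INTEGRATIONS = {"example-adult-site": {"allowed_query": "18+"}}

def handle_age_request(site_id: str, query: str, device_age: int) -> dict:
    """Validate the requesting integration and return only an age signal."""
    approved = APPROVED_INTEGRATIONS.get(site_id)
    if approved is None or approved["allowed_query"] != query:
        return {"status": "denied"}  # unapproved site or out-of-scope use case
    minimum = int(query.rstrip("+"))
    # Only the threshold result crosses the API boundary, never the raw age.
    return {"status": "ok", "meets_threshold": device_age >= minimum}
```

In this sketch, `handle_age_request("example-adult-site", "18+", 34)` yields an approval with the threshold result, while an unregistered site, or a registered site asking a question outside its approved use case, receives only a denial.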
Step 1: Age Verification of the User

Upon activation of a device, the user validates their age through commercially reasonable methods put in place by the operating system, such as inputting the required information on the local device.

Once the age information is verified, it can be stored locally on the device or by other secure methods implemented by the operating system. Age information can be stored on the device securely, similar to how government-issued IDs are currently stored on devices.

Step 2: Websites Requiring Age Verification Must Implement Sufficient API Integrations

Any website that is legally required to verify the ages of its users must implement a sufficient API integration with operating systems. The API integration must be reviewed and approved by the operating system before the site can request and receive any age data.

APIs are a common practice and already exist on devices, major operating systems, websites, and applications. Websites and applications use APIs daily to communicate with other services without needing to access each other's codebase or backend architecture. Each API integration use case is tracked by the operating systems as standard procedure to ensure the traceability and accountability of websites using these APIs. Websites are required to provide full transparency into the identity information they request.

Step 3: User Attempts to Access an Age-Restricted Site

When a user attempts to visit a site using such an API, the site sends a request for the age-verified data through the approved API. The API receives and processes the request. Based on the agreed-upon terms of the API integration, such as the use case for the information and the age data needed, the API retrieves the necessary information from the operating system.
The operating system could provide either the exact age of the user or signals based on the legal thresholds defined within the state (e.g., <13, <16, 18-, 18+). The device then provides the verified age data to the site.

After obtaining the age data, the site can allow access or display permitted content to the user according to the site's age restriction policies. If a site is properly compliant, a user whose age is determined to be below the required threshold will not be able to access the site.

Effective, Secure, and Equitable Age Assurance

Device-based age assurance is an effective, easy-to-implement, and technically feasible solution for preventing minors from accessing age-inappropriate material while protecting the privacy and Constitutional rights of adult users. By verifying a user's age through the device's operating system and securely sharing it through an API with approved websites and platforms, device-based age assurance mitigates the inherent privacy risks, Constitutional challenges, and patchwork compliance currently posed by existing age verification laws. In addition, a device-based age assurance mechanism does not dissuade users from visiting compliant platforms and websites. Compliance with device-based age assurance would be better for business, reducing the number of non-compliant websites and therefore the opportunities for minors to access age-inappropriate content. Overall, the common goal of protecting minors online would be achieved.

Device-based age assurance is technically feasible to implement and can be securely leveraged across all platforms, apps, and websites. As demonstrated by Apple, one of the three major operating system companies, the innovations and technologies required to implement device-based age assurance are already widely in use and could be updated to enable this assurance mechanism globally within a short time horizon.
Users would only need to validate and share their personal information with their operating system, which many users already trust with a high level of privacy and security. Device-based age assurance creates a simpler, more transparent, and more secure ecosystem for all parties, and fulfills its main purpose of protecting minors from accessing inappropriate content online.
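The threshold signals mentioned in Step 3 can be illustrated with a small helper that maps an exact age to a coarse bucket, so that sites receive only the signal they need rather than the exact age. The bucket labels follow the thresholds named in the text and are purely illustrative; the actual signal set would depend on the thresholds a state defines.

```python
# Illustrative mapping from an exact age to the coarse signals mentioned in
# Step 3 (<13, <16, 18-, 18+). Labels follow the thresholds named in the
# text; a real deployment would use whatever buckets state law defines.

def age_signal(age: int) -> str:
    """Return a coarse age bucket instead of the exact age."""
    if age < 13:
        return "<13"
    if age < 16:
        return "<16"
    if age < 18:
        return "18-"
    return "18+"
```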

ICMEC Publications

Part I – A statistical analysis of applications made in 2021 under the Hague Convention of 25 October 1980 on the Civil Aspects of International Child Abduction: Global Report

2024

(HCCH) This is the fifth statistical study to look into the operation of the Hague Convention of 25 October 1980 on the Civil Aspects of International Child Abduction. This study concerns all applications received by Central Authorities in 2021. Previous studies analysed data from applications made in 2015 (Fourth Study), 2008 (Third Study), 2003 (Second Study), and 1999 (First Study).

FOUND: A Story of Hope

2021

Elieth Samara was abducted when she was just 19 days old. For 33 long hours, her mother Nayelly lived a nightmare until she was reunited with her daughter thanks to the tireless investigation of ICMEC and our allies around the globe. Watch this powerful story of heartbreak and hope. Once Elieth Samara was reported missing, ICMEC served as the on-the-ground advisor in the investigation. For 33 hours, ICMEC coordinated with dozens of partners from the U.S. Embassy, the Civil National Police, the Public Ministry in Guatemala, Alba-Keneth Alerts, and Facebook. These front-line child protection heroes acted swiftly and in coordination thanks to the training, technology, and inter-agency collaboration that has been fostered for more than a decade. While Elieth has been safely reunited with her mother, our work is not done. More than a million children like her are reported missing each year around the world, and many more are victims of abuse and exploitation. That's why ICMEC remains dedicated to continuing our work, because one child missing, abused, or exploited is one too many.

Child Protection, ICMEC Publications, International Schools, Parents, Mandatory Reporters, Schools, Teachers, Training

Mandatory Reporter Infographic for Youth Serving Professionals

2020

(ICMEC) Your Role as a Mandatory Reporter, published by ICMEC to support youth-serving professionals in identifying signs and indicators, recognizing grooming behavior, and providing trauma-informed responses to child disclosure. Additional support for training on identifying grooming is available in these additional resources.

Child Abuse/Exploitation, Child Protection, Peer-Peer Abuse, Policies and Procedures, Schools, Teachers

School Based Violence Prevention Handbook

2020

(WHO) This handbook addresses the key elements of violence and abuse prevention in schools. It provides guidance for school officials and education authorities on how schools can embed violence prevention within their routine activities and across the points of interaction schools provide with children, parents, and other community members. If implemented, the handbook will contribute much to helping achieve the SDGs and other global health and development goals. Produced by the World Health Organization in collaboration with UNESCO and UNICEF. Available in English, Spanish, and French.

Safety Precautions for Teachers and Students During COVID

2020

(ISPCAN) Infographic to support a healthy return to in-person school.

Child Sexual Abuse in Sports

2020

(IICSA) Findings of the Independent Inquiry into Child Sexual Abuse in the United Kingdom. The report concludes with some suggestions for change from victims and survivors, including continuing to raise awareness of sexual abuse in a sports context, better support and protection for those coming forward, and improving the communication that organisations have with survivors of abuse.

Education Portal Tour

2020

(ICMEC) View this introductory tour on how to use the Education Portal.

Glossary on Sexual Exploitation and Abuse

2017

(United Nations) This glossary compiles existing terminology and nomenclature related to sexual exploitation and abuse to provide conceptual clarity and a common understanding of key terms used by different United Nations entities in the discourse on this topic.

Child Protection, Missing Children/Child Abduction

Identifying Risk Factors for a Potential Parental Child Abduction

2017

(Return US Home) The following questions make use of early-identification risk factors for a parental child abduction and can help identify where a credible risk of abduction may exist. The list of questions is not exhaustive, but it will help the court determine when abduction prevention language should be utilized to protect the child's right to safe and continual access to both parties. If one or more risk factors are identified, appropriate abduction prevention language should be utilized (see Judicial Options for the Prevention of Parental Child Abduction and Parenting Plan Travel Restraint Examples).

Child Protection, Missing Children/Child Abduction

Judicial Options for the Prevention of Parental Child Abduction

2017

(Return US Home) Return US Home developed these tiered options for PCA prevention language to aid U.S. attorneys and judges in protecting a child under their purview from a potential abduction. The document may be used alone, but it is intended to be informed by a review of evidence and used in consultation with the identifiable risk factors questionnaire and the Parenting Plan Travel Restraint Examples.

Child Protection, Missing Children/Child Abduction

Parenting Plan Travel Restraint Language Examples

2017

(Return US Home) This document provides examples of parenting plan travel restraints that have been reviewed by the appropriate U.S. federal authorities for optimal language that may be acceptable for a Prevent Abduction Program request.

Child Protection, Cybercrime

Digital Threats to Child Safety: A Brief Guide for Organizations on Outreach and Educational Activities

2016

(ROCIT) In recent years, the theme of digital safety for children has become more urgent, leading to the active organization of outreach activities to raise awareness of dangerous Internet content and to promote prevention of Internet-based threats. Secondary educational institutions, clubs for children and teenagers, cultural institutions, and in some cases the Internet itself can all be utilized to this end and may provide a more systematic framework for these activities. Often organizers are hindered by a lack of information and a clear understanding of what should be discussed at such events. This guide is designed specifically for professionals who wish to engage in outreach and awareness work focused on digital content safety for children and adolescents and must first obtain a basic understanding of the problem and its characteristics in order to select specific materials and literature. The guide addresses the primary types of Internet content that pose threats to children, along with basic steps to stop the circulation of these types of content.
