Private Aggregation API

Unofficial Proposal Draft

This version:
https://patcg-individual-drafts.github.io/private-aggregation-api
Issue Tracking:
GitHub
Inline In Spec
Editor:
(Google)

Abstract

A generic API for measuring aggregate, cross-site data in a privacy-preserving manner. The potentially identifying cross-site data is encapsulated into aggregatable reports. To prevent leakage, this data is encrypted, ensuring it can only be processed by an aggregation service. During processing, this service will add noise and impose limits on how many queries can be performed.

Status of this document

This document is an individual draft proposal. It has not been adopted by the Private Advertising Technology Community Group, but it may be discussed in that CG’s meetings. Please note that under the W3C Community Contributor License Agreement (CLA) there is a limited opt-out and other conditions apply. Learn more about W3C Community and Business Groups.

1. Introduction

This section is non-normative.

1.1. Motivation

Browsers are now working to prevent cross-site user tracking, including by partitioning storage and removing third-party cookies. There are a range of API proposals to continue supporting legitimate use cases in a way that respects user privacy. Many of these APIs, including the Shared Storage API and the Protected Audience API, isolate potentially identifying cross-site data in special contexts, which ensures that the data cannot escape the user agent.

Relative to cross-site data from an individual user, aggregate data about groups of users can be less sensitive and yet would be sufficient for a wide range of use cases. An aggregation service has been built to allow reporting noisy, aggregated cross-site data. This service was originally created for use by the Attribution Reporting API, but allowing more general aggregation supports additional use cases. In particular, the Protected Audience and Shared Storage APIs expect this functionality to be available.

1.2. Overview

This document outlines a general-purpose API that can be called from isolated contexts that have access to cross-site data (such as a Shared Storage worklet). Within these contexts, potentially identifying data can be encapsulated into "aggregatable reports". To prevent leakage, the cross-site data in these reports is encrypted to ensure it can only be processed by the aggregation service. During processing, this service adds noise and imposes limits on how many queries can be performed.

This API provides functions that allow the origin to construct an aggregatable report and specify the values to be embedded into its encrypted payload (for later computation via the aggregation service). These calls result in the aggregatable report being queued to be sent to the reporting endpoint of the script’s origin after a delay. After the endpoint receives the reports, it will batch the reports and send them to the aggregation service for processing. The output of that process is a summary report containing the (approximate) result, which is dispatched back to the script’s origin.
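
For example, code running in an isolated context (such as a Shared Storage worklet) could record a contribution as follows. This is a non-normative sketch; the bucket and value shown are illustrative:

privateAggregation.contributeToHistogram({
  bucket: 1369n,  // Histogram bucket: a 128-bit integer, expressed as a BigInt.
  value: 45,      // A non-negative integer value added to that bucket.
});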

1.3. Alternatives considered

Instead of the chosen API shape, we considered aligning with a design much closer to fetch(). However, a few key differences make that approach unfavorable.

So, we chose the more tailored API shape detailed below.

2. Exposed interface

[Exposed=(InterestGroupScriptRunnerGlobalScope,SharedStorageWorklet),
 SecureContext]
interface PrivateAggregation {
  undefined contributeToHistogram(PAHistogramContribution contribution);
  undefined enableDebugMode(optional PADebugModeOptions options = {});
};

dictionary PAHistogramContribution {
  required bigint bucket;
  required long value;
  bigint filteringId = 0;
};

dictionary PADebugModeOptions {
  required bigint debugKey;
};

Per the Web Platform Design Principles, we should consider switching long to [EnforceRange] long long.

enableDebugMode(options)'s argument should not have a default value of {}. Alternatively, debugKey should not be required in PADebugModeOptions.

Each PrivateAggregation object has the following fields:

scoping details (default null)

A scoping details or null

allowed to use (default false)

A boolean

Note: See Exposing to global scopes below.

The contributeToHistogram(PAHistogramContribution contribution) method steps are:
  1. If contribution["bucket"] is not in the range [0, 2^128 − 1], throw a RangeError.

  2. If contribution["value"] is negative, throw a RangeError.

  3. Let scopingDetails be this's scoping details.

  4. Let batchingScope be the result of running scopingDetails's get batching scope steps.

  5. Let filteringIdMaxBytes be the default filtering ID max bytes.

  6. If pre-specified report parameters map[batchingScope] exists:

    1. Set filteringIdMaxBytes to pre-specified report parameters map[batchingScope]'s filtering ID max bytes.

  7. If contribution["filteringId"] is not contained in the range 0 to 256^filteringIdMaxBytes, exclusive, throw a RangeError.

  8. Let entry be a new contribution cache entry with the items:

    contribution

    contribution

    batching scope

    batchingScope

    debug scope

    The result of running scopingDetails's get debug scope steps.

  9. Append entry to the contribution cache.

Ensure errors are of an appropriate type, e.g. InvalidAccessError is deprecated.

Consider accepting an array of contributions. [Issue #44]
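
A non-normative sketch of how these validation steps surface to callers, assuming the default filtering ID max bytes of 1:

// Throws a RangeError: the bucket must fit in 128 bits.
privateAggregation.contributeToHistogram({ bucket: 1n << 128n, value: 1 });

// Throws a RangeError: the value must not be negative.
privateAggregation.contributeToHistogram({ bucket: 1n, value: -1 });

// Accepted: with a filtering ID max bytes of 1, filteringId may be 0 through 255.
privateAggregation.contributeToHistogram({ bucket: 1n, value: 2, filteringId: 255n });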

The enableDebugMode(optional PADebugModeOptions options) method steps are:
  1. Let scopingDetails be this's scoping details.

  2. Let debugScope be the result of running scopingDetailsget debug scope steps.

  3. If debug scope map[debugScope] exists, throw a "DataError" DOMException.

    Note: This would occur if enableDebugMode() has already been run for this debug scope.

  4. Let debugKey be null.

  5. If options was given:

    1. If options["debugKey"] is not in the range [0, 2^64 − 1], throw a "DataError" DOMException.

    2. Set debugKey to options["debugKey"].

  6. Let debugDetails be a new debug details with the items:

    enabled

    true

    key

    debugKey

  7. Optionally, set debugDetails to a new debug details.

    Note: This allows the user agent to make debug mode unavailable globally or just for certain callers.

  8. Set debug scope map[debugScope] to debugDetails.

Ensure errors are of an appropriate type, e.g. InvalidAccessError is deprecated.
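
A non-normative usage sketch; the debug key is illustrative:

// Marks contributions in this debug scope as debug-enabled, with an optional key.
privateAggregation.enableDebugMode({ debugKey: 1234n });

// A second call in the same debug scope throws a "DataError" DOMException.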

3. Exposing to global scopes

To expose this API to a global scope, a read-only attribute privateAggregation of type PrivateAggregation should be exposed on the global scope. Its getter steps should be set to the get the privateAggregation steps given this.

Each global scope should set the allowed to use for the PrivateAggregation object it exposes based on whether a relevant document is allowed to use the "private-aggregation" policy-controlled feature.

Additionally, each global scope should set the scoping details for the PrivateAggregation object it exposes to a non-null value. The global scope should wait to set the field until the API is intended to be available.

Shared Storage only allows Private Aggregation when an operation is being invoked, not in the top-level context:
class ExampleOperation {
  async run(data) {
    privateAggregation.contributeToHistogram(...)  // This is allowed.
  }
}
register('example-operation', ExampleOperation);

privateAggregation.contributeToHistogram(...)  // This would cause an error.

So, Shared Storage sets the scoping details immediately after the initial execution of the module script is complete.

For any batching scope returned by the get batching scope steps, the process contributions for a batching scope steps should later be performed given that same batching scope, the global scope’s relevant settings object's origin, some context type and a timeout (or null).

Note: This last requirement means that global scopes with different origins cannot share the same batching scope, see Same-origin policy discussion.

For any debug scope returned by the get debug scope steps, the mark a debug scope complete steps should later be performed given that same debug scope.

Note: A later algorithm asserts that, for any contribution cache entry in the contribution cache, the mark a debug scope complete steps were performed given the entry’s debug scope before the process contributions for a batching scope steps are performed given the entry’s batching scope.

4. Structures

4.1. Batching scope

A batching scope is a unique internal value that identifies which PAHistogramContributions should be sent in the same aggregatable report unless their debug details differ.

Unique internal value is not an exported definition. See infra/583.

4.2. Debug scope

A debug scope is a unique internal value that identifies which PAHistogramContributions should have their debug details affected by the presence or absence of a call to enableDebugMode() in the same period of execution.

4.3. Scoping details

A scoping details is a struct with the following items:
get batching scope steps

An algorithm returning a batching scope

get debug scope steps

An algorithm returning a debug scope

4.4. Debug details

A debug details is a struct with the following items:
enabled (default false)

A boolean

key (default null)

An unsigned 64-bit integer or null. The key must be null if enabled is false.

4.5. Contribution cache entry

A contribution cache entry is a struct with the following items:
contribution

A PAHistogramContribution

batching scope

A batching scope

debug scope

A debug scope

debug details (default null)

A debug details or null

4.6. Aggregatable report

An aggregatable report is a struct with the following items:

reporting origin

An origin

original report time

A moment

report time

A moment

contributions

A list of PAHistogramContributions

api

A context type

report ID

A string

debug details

A debug details

aggregation coordinator

An aggregation coordinator

context ID

A string or null

filtering ID max bytes

A positive integer

queued

A boolean

4.7. Aggregation coordinator

An aggregation coordinator is an origin that the allowed aggregation coordinator set contains.

Consider switching to the suitable origin concept used by the Attribution Reporting API here and elsewhere.

Move other structures to be defined inline instead of via a header. Consider also removing all the subheadings.

4.8. Context type

A context type is a string indicating what kind of global scope the PrivateAggregation object was exposed in. Each API exposing Private Aggregation should pick a unique string (or multiple) for this.

4.9. Pre-specified report parameters

A pre-specified report parameters is a struct with the following items:

context ID (default: null)

A string or null

filtering ID max bytes (default: default filtering ID max bytes)

A positive integer

5. Storage

A user agent holds an aggregatable report cache, which is a list of aggregatable reports.

A user agent holds an aggregation coordinator map, which is a map from batching scopes to aggregation coordinators.

A user agent holds a pre-specified report parameters map, which is a map from batching scopes to pre-specified report parameters.

A user agent holds a contribution cache, which is a list of contribution cache entries.

A user agent holds a debug scope map, which is a map from debug scopes to debug details.

Elsewhere, link to definition when using user agent.

5.1. Clearing storage

The user agent must expose controls that allow the user to delete data from the aggregatable report cache as well as any contribution history data stored for the consume budget if permitted algorithm.

The user agent may expose controls that allow the user to delete data from the contribution cache, the debug scope map and the pre-specified report parameters map.

6. Constants

Default filtering ID max bytes is a positive integer controlling the max bytes used if none is explicitly chosen. Its value is 1.

Valid filtering ID max bytes range is a set of positive integers controlling the allowable values of max bytes. Its value is the range 1 to 8, inclusive.
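
For example, with the default filtering ID max bytes of 1, filtering IDs must fall in the range [0, 255]; at the maximum of 8 bytes, IDs up to 2^64 − 1 can be represented.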

Consider adding more constants.

7. Implementation-defined values

Allowed aggregation coordinator set is a set of origins that controls which origins are valid aggregation coordinators. Every item in this set must be a potentially trustworthy origin.

Default aggregation coordinator is an aggregation coordinator that controls which is used for a report if none is explicitly selected.

Maximum report contributions is a positive integer that controls how many contributions can be present in a single report.

Minimum report delay is a non-negative duration that controls the minimum delay to deliver an aggregatable report.

Randomized report delay is a positive duration that controls the random delay to deliver an aggregatable report. This delay is additional to the minimum report delay.

8. Permissions Policy integration

This specification defines a policy-controlled feature identified by the string "private-aggregation". Its default allowlist is "*".

Note: The allowed to use field is set by other specifications that integrate with this API according to this policy-controlled feature.
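
For example, a site could disable the feature for a document and its descendants with the following response header (non-normative):

Permissions-Policy: private-aggregation=()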

9. Algorithms

To serialize an integer, represent it as a string of the shortest possible decimal number.

This would ideally be replaced by a more descriptive algorithm in Infra. See infra/201.

9.1. Exported algorithms

Note: These algorithms allow other specifications to integrate with this API.

To get the privateAggregation given a PrivateAggregation this:

  1. Let scopingDetails be this’s scoping details.

  2. If scopingDetails is null, throw a "NotAllowedError" DOMException.

    Note: This indicates the API is not yet available, for example, because the initial execution of the script after loading is not complete.

    Consider improving developer ergonomics here (e.g. a way to detect this case).

  3. If this’s allowed to use is false, throw an "InvalidAccessError" DOMException.

  4. Return this.

Ensure errors are of an appropriate type, e.g. InvalidAccessError is deprecated.

To append an entry to the contribution cache given a contribution cache entry entry:

  1. Append entry to the contribution cache.

To get a debug details given a debug scope debugScope, perform the following steps. They return a debug details.

  1. If debug scope map[debugScope] exists, return debug scope map[debugScope].

  2. Otherwise, return a new debug details.

To mark a debug scope complete given a debug scope debugScope and an optional debug details or null debugDetailsOverride (default null):

  1. Let debugDetails be debugDetailsOverride.

  2. If debug scope map[debugScope] exists:

    1. Assert: debugDetailsOverride is null.

      Note: The override can be provided if the debug details have not been set otherwise.

    2. Set debugDetails to debug scope map[debugScope].

    3. Remove debug scope map[debugScope].

    4. If debugDetails’s key is not null, assert: debugDetails’s enabled is true.

  3. If debugDetails is null, set debugDetails to a new debug details.

  4. For each entry of the contribution cache:

    1. If entry’s debug scope is debugScope, set entry’s debug details to debugDetails.

To determine if a report should be sent deterministically given a pre-specified report parameters preSpecifiedParams, perform the following steps. They return a boolean:

  1. If preSpecifiedParams's context ID is not null, return true.

  2. If preSpecifiedParams's filtering ID max bytes is not the default filtering ID max bytes, return true.

  3. Return false.

Note: If a context ID or non-default filtering ID max bytes was specified, a report is sent, even if there are no contributions or there is insufficient budget for the requested contributions. See Protecting against leaks via the number of reports.

To process contributions for a batching scope given a batching scope batchingScope, an origin reportingOrigin, a context type contextType and a moment or null timeout:

  1. Let batchEntries be a new list.

  2. For each entry of the contribution cache:

    1. If entry’s batching scope is batchingScope:

      1. Assert: entry’s debug details is not null.

        Note: This asserts that the mark a debug scope complete steps were run before the process contributions for a batching scope steps.

      2. Append entry to batchEntries.

  3. Let aggregationCoordinator be the default aggregation coordinator.

  4. If aggregation coordinator map[batchingScope] exists:

    1. Set aggregationCoordinator to aggregation coordinator map[batchingScope].

    2. Remove aggregation coordinator map[batchingScope].

  5. Let preSpecifiedParams be a new pre-specified report parameters.

  6. If pre-specified report parameters map[batchingScope] exists:

    1. Set preSpecifiedParams to pre-specified report parameters map[batchingScope].

    2. Remove pre-specified report parameters map[batchingScope].

  7. Let isDeterministicReport be the result of determining if a report should be sent deterministically given preSpecifiedParams.

  8. If isDeterministicReport is false, assert: timeout is null.

    Note: Timeouts can only be used for deterministic reports.

  9. If batchEntries is empty and isDeterministicReport is false, return.

  10. Let batchedContributions be a new ordered map.

  11. For each entry of batchEntries:

    1. Remove entry from the contribution cache.

    2. Let debugDetails be entry’s debug details.

    3. If batchedContributions[debugDetails] does not exist:

      1. Set batchedContributions[debugDetails] to a new list.

    4. Append entry’s contribution to batchedContributions[debugDetails].

  12. If batchedContributions is empty:

    1. Let debugDetails be a new debug details.

    2. Set batchedContributions[debugDetails] to a new list.

  13. For each debugDetails → contributions of batchedContributions:

    1. Perform the report creation and scheduling steps with reportingOrigin, contextType, contributions, debugDetails, aggregationCoordinator, preSpecifiedParams and timeout.

Note: These steps break up the contributions based on their debug details as each report can only have one set of metadata.

To determine if an origin is an aggregation coordinator given an origin origin, perform the following steps. They return a boolean.
  1. Return whether origin is an aggregation coordinator.

To set the aggregation coordinator for a batching scope given an origin origin and a batching scope batchingScope:
  1. Assert: origin is an aggregation coordinator.

  2. Set aggregation coordinator map[batchingScope] to origin.

Elsewhere, surround algorithms in a <div algorithm> block to match, and add styling for all algorithms per bikeshed/1472.

To set the pre-specified report parameters for a batching scope given a pre-specified report parameters params and a batching scope batchingScope:

  1. Let contextId be params's context ID.

  2. Assert: contextId is null or contextId’s length is not larger than 64.

  3. Let filteringIdMaxBytes be params's filtering ID max bytes.

  4. Assert: filteringIdMaxBytes is contained in the valid filtering ID max bytes range.

  5. Set pre-specified report parameters map[batchingScope] to params.

9.2. Scheduling reports

To perform the report creation and scheduling steps with an origin reportingOrigin, a context type api, a list of PAHistogramContributions contributions, a debug details debugDetails, an aggregation coordinator aggregationCoordinator, a pre-specified report parameters preSpecifiedParams and a moment or null timeout:

  1. Assert: reportingOrigin is a potentially trustworthy origin.

  2. Optionally, return.

    Note: This implementation-defined condition is intended to allow user agents to drop reports for a number of reasons, for example user opt-out or an origin not being enrolled.

  3. Let truncatedContributions be a new list.

  4. If contributions has a size greater than maximum report contributions:

    1. For each n of the range 0 to maximum report contributions, exclusive:

      1. Append contributions[n] to truncatedContributions.

  5. Otherwise, set truncatedContributions to contributions.

  6. Let contributionSum be 0.

  7. For each contribution of truncatedContributions:

    1. Assert: contribution["value"] is non-negative.

    2. Add contribution["value"] to contributionSum.

  8. Let currentWallTime be the current wall time.

  9. Let sufficientBudget be the result of consuming budget if permitted given contributionSum, reportingOrigin, api and currentWallTime.

  10. If sufficientBudget is false:

    1. Let isDeterministicReport be the result of determining if a report should be sent deterministically given preSpecifiedParams.

    2. If isDeterministicReport is false, return.

    3. Empty truncatedContributions.

  11. Let report be the result of obtaining an aggregatable report given reportingOrigin, api, truncatedContributions, debugDetails, aggregationCoordinator, preSpecifiedParams, timeout and currentWallTime.

  12. Append report to the user agent’s aggregatable report cache.

To consume budget if permitted given a long value, an origin origin, a context type api and a moment currentTime, perform implementation-defined steps. They return a boolean, which indicates whether there is sufficient 'contribution budget' left to send the requested contribution value. This budget should be bound to usage over time, e.g. the contribution sum over the last 24 hours. The algorithm should assume that the contribution will be sent if and only if true is returned, i.e. it should consume the budget in that case. If value is zero, this algorithm should return true.

To obtain an aggregatable report given an origin reportingOrigin, a context type api, a list of PAHistogramContributions contributions, a debug details debugDetails, an aggregation coordinator aggregationCoordinator, a pre-specified report parameters preSpecifiedParams, a moment or null timeout and a moment currentTime, perform the following steps. They return an aggregatable report.

  1. Assert: reportingOrigin is a potentially trustworthy origin.

  2. Let reportTime be the result of running obtain a report delivery time given currentTime and timeout.

  3. Let report be a new aggregatable report with the items:

    reporting origin

    reportingOrigin

    original report time

    reportTime

    report time

    reportTime

    contributions

    contributions

    api

    api

    report ID

    The result of generating a random UUID.

    debug details

    debugDetails

    aggregation coordinator

    aggregationCoordinator

    context ID

    preSpecifiedParams's context ID

    filtering ID max bytes

    preSpecifiedParams's filtering ID max bytes

    queued

    false

  4. Return report.

To obtain a report delivery time given a moment currentTime and a moment or null timeout, perform the following steps. They return a moment.

  1. If timeout is not null:

    1. Return timeout.

  2. If automation local testing mode enabled is true, return currentTime.

  3. Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.

  4. Return currentTime + minimum report delay + r * randomized report delay.
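
A non-normative sketch of this computation, using illustrative delay constants (10 and 50 minutes) that match no particular implementation:

// Returns the moment (in ms since the epoch) at which a report should be delivered.
function obtainReportDeliveryTime(currentTime, timeout, localTestingModeEnabled) {
  const MINIMUM_REPORT_DELAY_MS = 10 * 60 * 1000;     // illustrative
  const RANDOMIZED_REPORT_DELAY_MS = 50 * 60 * 1000;  // illustrative
  if (timeout !== null) return timeout;
  if (localTestingModeEnabled) return currentTime;
  const r = Math.random();  // uniform in [0, 1)
  return currentTime + MINIMUM_REPORT_DELAY_MS + r * RANDOMIZED_REPORT_DELAY_MS;
}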

9.3. Sending reports

Note: This section is largely copied from the Attribution Reporting API spec, adapting as necessary.

Do we have to use the queue a task algorithm here?

The user agent must periodically attempt to queue reports for sending given its aggregatable report cache.

To attempt to queue reports for sending given a list of aggregatable reports reports:

  1. For each report of reports, run these steps in parallel:

    1. Run these steps, but abort when the user agent shuts down:

      1. If report’s queued value is true, return.

      2. Set report’s queued value to true.

      3. Let currentWallTime be the current wall time.

      4. If report’s report time is before currentWallTime, set report’s report time to currentWallTime plus an implementation-defined random non-negative duration.

        Note: On startup, it is possible the user agent will need to send many reports whose report times passed while the browser was closed. Adding random delay prevents temporal joining of reports.

      5. Wait until the current wall time is equal to or after report’s report time.

      6. Optionally, wait a further implementation-defined non-negative duration.

        Note: This is intended to allow user agents to optimize device resource usage and wait for the user agent to be online.

      7. Run attempt to deliver a report with report.

    2. If aborted, set report’s queued value to false.

      Note: It might be more practical to perform this step when the user agent next starts up.

To attempt to deliver a report given an aggregatable report report:

  1. Let url be the result of obtaining a reporting endpoint given report’s reporting origin and report’s api.

  2. Let data be the result of serializing an aggregatable report given report.

  3. If data is an error, remove report from the aggregatable report cache and return.

    Do we need to queue this task?

  4. Let request be the result of creating a report request given url and data.

  5. Queue a task to fetch request with processResponse being the following steps:

    1. Let shouldRetry be an implementation-defined boolean. The value should be false if no error occurred.

    2. If shouldRetry is true:

      1. Set report’s report time to the current wall time plus an implementation-defined non-negative duration.

      2. Set report’s queued value to false.

    3. Otherwise, remove report from the aggregatable report cache.

To obtain a reporting endpoint given an origin reportingOrigin and context type api, perform the following steps. They return a URL.

  1. Assert: reportingOrigin is a potentially trustworthy origin.

  2. Let path be the concatenation of «".well-known/private-aggregation/report-", api».

    Register this well-known directory. [Issue #67]

  3. Let base be the result of running the URL parser on the serialization of reportingOrigin.

  4. Assert: base is not failure.

  5. Let result be the result of running the URL parser on path with base.

  6. Assert: result is not failure.

  7. Return result.
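
A non-normative sketch; the reporting origin is hypothetical:

function obtainReportingEndpoint(reportingOrigin, api) {
  return new URL(`.well-known/private-aggregation/report-${api}`, reportingOrigin).href;
}

// obtainReportingEndpoint("https://reporter.example", "shared-storage")
//   => "https://reporter.example/.well-known/private-aggregation/report-shared-storage"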

To create a report request given a URL url and a byte sequence body:

  1. Let request be a new request with the following properties:

    method

    "POST"

    URL

    url

    header list

    «("Content-Type", "application/json")»

    unsafe-request flag

    set

    body

    body

    client

    null

    window

    "no-window"

    service-workers mode

    "none"

    initiator

    ""

    referrer

    "no-referrer"

    mode

    "cors"

    credentials mode

    "omit"

    cache mode

    "no-store"

  2. Return request.

9.4. Serializing reports

Note: This section is largely copied from the Attribution Reporting API spec, adapting as necessary.

To serialize an aggregatable report given an aggregatable report report, perform the following steps. They return a byte sequence or an error.

  1. Let aggregationServicePayloads be the result of obtaining the aggregation service payloads given report.

  2. If aggregationServicePayloads is an error, return aggregationServicePayloads.

  3. Let data be an ordered map of the following key/value pairs:

    "aggregation_coordinator_origin"

    report’s aggregation coordinator, serialized.

    "aggregation_service_payloads"

    aggregationServicePayloads

    "shared_info"

    The result of obtaining a report’s shared info given report.

  4. Let debugKey be report’s debug details's key.

  5. If debugKey is not null, set data["debug_key"] to debugKey.

  6. Let contextId be report’s context ID.

  7. If contextId is not null, set data["context_id"] to contextId.

  8. Return the byte sequence resulting from executing serialize an infra value to JSON bytes on data.
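
For illustration, a serialized report might look like the following (non-normative; all values, including the coordinator, key ID and payload, are placeholders, and shared_info is itself a serialized JSON string):

{
  "aggregation_coordinator_origin": "https://coordinator.example",
  "aggregation_service_payloads": [{
    "key_id": "example-key-id",
    "payload": "ZW5jcnlwdGVkLXBheWxvYWQ..."
  }],
  "shared_info": "{\"api\":\"shared-storage\",\"report_id\":\"<UUID>\",\"reporting_origin\":\"https://reporter.example\",\"scheduled_report_time\":\"1692000000\",\"version\":\"1.0\"}",
  "context_id": "example-context"
}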

To obtain the aggregation service payloads given an aggregatable report report, perform the following steps. They return a list of maps or an error.

  1. Let publicKeyTuple be the result of obtaining the public key for encryption given report’s aggregation coordinator.

  2. If publicKeyTuple is an error, return publicKeyTuple.

  3. Let (pkR, keyId) be publicKeyTuple.

  4. Let plaintextPayload be the result of obtaining the plaintext payload given report.

  5. Let sharedInfo be the result of obtaining a report’s shared info given report.

  6. Let encryptedPayload be the result of encrypting the payload given plaintextPayload, pkR and sharedInfo.

  7. If encryptedPayload is an error, return encryptedPayload.

  8. Let aggregationServicePayloads be a new list.

  9. Let aggregationServicePayload be an ordered map of the following key/value pairs:

    "key_id"

    keyId

    "payload"

    encryptedPayload, base64 encoded

  10. If report’s debug details's enabled field is true:

    1. Set aggregationServicePayload["debug_cleartext_payload"] to plaintextPayload, base64 encoded.

  11. Append aggregationServicePayload to aggregationServicePayloads.

  12. Return aggregationServicePayloads.

To obtain the public key for encryption given an aggregation coordinator aggregationCoordinator, perform the following steps. They return a tuple consisting of a public key and a string, or an error.

  1. Let url be a new URL record.

  2. Set url’s scheme to aggregationCoordinator’s scheme.

  3. Set url’s host to aggregationCoordinator’s host.

  4. Set url’s port to aggregationCoordinator’s port.

  5. Set url’s path to «".well-known", "aggregation-service", "v1", "public-keys"».

  6. Return an implementation-defined tuple consisting of a public key from url and a string that should uniquely identify the public key or, in the event that the user agent failed to obtain the public key from url, an error. This step may be asynchronous.

Specify this in terms of fetch. Add details about which encryption standards to use, length requirements, etc.

Note: The user agent is encouraged to enforce regular key rotation. If there are multiple keys, the user agent can independently pick a key uniformly at random for every encryption operation.

To obtain the plaintext payload given an aggregatable report report, perform the following steps. They return a byte sequence.

  1. Let payloadData be a new list.

  2. Let contributions be report’s contributions.

  3. Assert: contributions's size is not greater than maximum report contributions.

  4. While contributions's size is less than maximum report contributions:

    1. Let nullContribution be a new PAHistogramContribution with the items:

      bucket

      0

      value

      0

      filteringId

      0

    2. Append nullContribution to contributions.

    Note: This padding protects against the number of contributions being leaked through the encrypted payload size, see discussion below.

  5. For each contribution of report’s contributions:

    1. Let filteringIdMaxBytes be report's filtering ID max bytes.

    2. Assert: contribution["filteringId"] is contained in the range 0 to 256^filteringIdMaxBytes, exclusive.

    3. Let contributionData be an ordered map of the following key/value pairs:

      "bucket"

      The result of encoding an integer for the payload given contribution["bucket"] and 16.

      "value"

      The result of encoding an integer for the payload given contribution["value"] and 4.

      "id"

      The result of encoding an integer for the payload given contribution["filteringId"] and filteringIdMaxBytes.

    4. Append contributionData to payloadData.

  6. Let payload be an ordered map of the following key/value pairs:

    "data"

    payloadData

    "operation"

    "histogram"

  7. Return the byte sequence resulting from CBOR encoding payload.
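
In CBOR diagnostic notation, the plaintext payload might look like the following (non-normative; byte strings are shown as h'...', and the padding with null contributions is truncated):

{
  "data": [
    {"bucket": h'00000000000000000000000000000559', "value": h'0000002d', "id": h'00'},
    {"bucket": h'00000000000000000000000000000000', "value": h'00000000', "id": h'00'}
    / ...further null contributions, up to maximum report contributions... /
  ],
  "operation": "histogram"
}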

To encrypt the payload given a byte sequence plaintextPayload, public key pkR and a string sharedInfo, perform the following steps. They return a byte sequence or an error.

  1. Let info be the result of UTF-8 encoding the concatenation of « "aggregation_service", sharedInfo ».

  2. Let (kem_id, kdf_id, aead_id) be (0x0020, 0x0001, 0x0003).

    Note: These indicate the HPKE algorithm identifiers, specifying the KEM function as DHKEM(X25519, HKDF-SHA256), the KDF function as HKDF-SHA256 and the AEAD function as ChaCha20Poly1305.

  3. Let hpkeContext be the result of setting up an HPKE sender’s context with pkR, info, kem_id, kdf_id and aead_id.

  4. Let aad be an empty byte sequence.

  5. Let encryptedPayload be the result of encrypting plaintextPayload with hpkeContext and aad.

  6. Return encryptedPayload.

To encode an integer for the payload given an integer intToEncode and an integer byteLength, return the representation of intToEncode as a big-endian byte sequence of length byteLength, left padding with zeroes as necessary.
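
A non-normative sketch of this encoding:

// Returns intToEncode as a big-endian byte sequence of length byteLength.
function encodeIntegerForPayload(intToEncode, byteLength) {
  const bytes = new Uint8Array(byteLength);
  let remaining = BigInt(intToEncode);
  for (let i = byteLength - 1; i >= 0; i--) {
    bytes[i] = Number(remaining & 0xffn);
    remaining >>= 8n;
  }
  return bytes;
}

// e.g. encodeIntegerForPayload(1369n, 16) yields 14 zero bytes followed by 0x05 0x59.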

To obtain a report’s shared info given an aggregatable report report, perform the following steps. They return a string.

  1. Let scheduledReportTime be the duration from the UNIX epoch to report’s original report time.

  2. Let sharedInfo be an ordered map of the following key/value pairs:

    "api"

    report’s api

    "report_id"

    report’s report ID

    "reporting_origin"

    The serialization of report’s reporting origin

    "scheduled_report_time"

    The number of seconds in scheduledReportTime, rounded down to the nearest number of whole seconds and serialized

    "version"

    "1.0"

  3. Return the result of serializing an infra value to a JSON string given sharedInfo.

10. User-agent automation

A user agent holds a boolean automation local testing mode enabled (default false).

For the purposes of user-agent automation and website testing, this document defines the below [WebDriver] extension commands to control the API configuration.

10.1. Set local testing mode

HTTP Method URI Template
POST /session/{session id}/private-aggregation/localtestingmode

The remote end steps are:

  1. If parameters is not a JSON-formatted Object, return a WebDriver error with error code invalid argument.

  2. Let enabled be the result of getting a property named "enabled" from parameters.

  3. If enabled is undefined or is not a boolean, return a WebDriver error with error code invalid argument.

  4. Set automation local testing mode enabled to enabled.

  5. Return success with data null.

Note: Without this, aggregatable reports would be subject to delays, making testing difficult.
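
For example, a WebDriver client could enable local testing mode with a request like the following (non-normative; the session ID is illustrative):

POST /session/28/private-aggregation/localtestingmode HTTP/1.1
Content-Type: application/json

{"enabled": true}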

11. Shared Storage API monkey patches

This should be moved to the Shared Storage spec. [Issue #43]

Go through all monkey patches and ensure every definition (including structures) that is needed is exported.

partial interface SharedStorageWorkletGlobalScope {
  readonly attribute PrivateAggregation privateAggregation;
};

dictionary SharedStoragePrivateAggregationConfig {
  USVString aggregationCoordinatorOrigin;
  USVString contextId;
  [EnforceRange] unsigned long long filteringIdMaxBytes;
};

partial dictionary SharedStorageRunOperationMethodOptions {
  SharedStoragePrivateAggregationConfig privateAggregationConfig;
};
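
A non-normative sketch of how a caller might supply this configuration to run() (the operation name, origin and values are illustrative):

await sharedStorage.run('example-operation', {
  data: { /* ... */ },
  privateAggregationConfig: {
    aggregationCoordinatorOrigin: 'https://coordinator.example',  // must be an allowed coordinator
    contextId: 'example-context',  // at most 64 characters
    filteringIdMaxBytes: 2,        // must lie in the valid filtering ID max bytes range [1, 8]
  },
});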

The privateAggregation getter steps are to get the privateAggregation given this.

Add the following algorithm in the subsection "Run Operation Methods":

To obtain the aggregation coordinator given a SharedStorageRunOperationMethodOptions options, perform the following steps. They return an aggregation coordinator, null or a DOMException:

  1. If options["privateAggregationConfig"] does not exist, return null.

  2. If options["privateAggregationConfig"]["aggregationCoordinatorOrigin"] does not exist, return null.

  3. Let url be the result of running the URL parser on options["privateAggregationConfig"]["aggregationCoordinatorOrigin"].

  4. If url is failure or null, return a new DOMException with name "SyntaxError".

    Consider throwing an error if the path is not empty.

  5. Let origin be url’s origin.

  6. If the result of determining if an origin is an aggregation coordinator given origin is false, return a new DOMException with name "DataError".

  7. Return origin.

To obtain the pre-specified report parameters given a SharedStorageRunOperationMethodOptions options, perform the following steps. They return a pre-specified report parameters, null, or a DOMException:

  1. If options["privateAggregationConfig"] does not exist, return null.

  2. Let privateAggregationConfig be options["privateAggregationConfig"].

  3. Let contextId be null.

  4. If privateAggregationConfig["contextId"] exists, set contextId to privateAggregationConfig["contextId"].

  5. If contextId is not null and contextId's length is greater than 64, return a new DOMException with name "DataError".

  6. Let filteringIdMaxBytes be the default filtering ID max bytes.

  7. If privateAggregationConfig["filteringIdMaxBytes"] exists, set filteringIdMaxBytes to privateAggregationConfig["filteringIdMaxBytes"].

  8. If filteringIdMaxBytes is not contained in the valid filtering ID max bytes range, return a new DOMException with name "DataError".

  9. Return a new pre-specified report parameters with the items:

    context ID

    contextId

    filtering ID max bytes

    filteringIdMaxBytes

The WindowSharedStorage's run() method steps are modified in four ways. First, add the following steps just after step 2 ("If addModule() has not yet been called, ..."), renumbering later steps as appropriate:

  1. Let preSpecifiedParams be the result of obtaining the pre-specified report parameters given options.

  2. If preSpecifiedParams is a DOMException, return a promise rejected with preSpecifiedParams.

  3. Let aggregationCoordinator be the result of obtaining the aggregation coordinator given options.

  4. If aggregationCoordinator is a DOMException, return a promise rejected with aggregationCoordinator.

Second, add the following steps in the nested scope just after "Let operation be operationMap[name]." (renumbering later steps as appropriate):
  1. Let batchingScope be a new batching scope.

  2. Let debugScope be a new debug scope.

  3. Let privateAggregationTimeout be null.

  4. Let isDeterministicReport be false.

  5. If preSpecifiedParams is not null:

    1. Set isDeterministicReport to the result of determining if a report should be sent deterministically given preSpecifiedParams.

    2. If isDeterministicReport:

      1. Set privateAggregationTimeout to the current wall time plus the deterministic operation timeout duration.

    3. Set the pre-specified report parameters for a batching scope given preSpecifiedParams and batchingScope.

  6. If aggregationCoordinator is not null, set the aggregation coordinator for a batching scope given aggregationCoordinator and batchingScope.

Third, add the following steps in the same nested scope just before the current penultimate step ("If options contains data", renumbering the last step as appropriate):

  1. Let hasRunPrivateAggregationCompletionTask be false.

  2. Let privateAggregationCompletionTask be an algorithm to perform the following steps:

    1. If hasRunPrivateAggregationCompletionTask, return.

    2. Set hasRunPrivateAggregationCompletionTask to true.

    3. Mark a debug scope complete given debugScope.

    4. Process contributions for a batching scope given batchingScope, outsideSettings's origin, "shared-storage" and privateAggregationTimeout.

  3. If isDeterministicReport, run the following steps in parallel:

    1. Wait until privateAggregationTimeout.

    2. Run privateAggregationCompletionTask.

Finally, at the end of the same nested scope, add the following step:

  1. When the above call returns, perform the following steps:

    1. Run privateAggregationCompletionTask.

The WindowSharedStorage's selectURL() method steps are modified in three ways. First, add the following steps just after step 5 ("If addModule() has not yet been called, ..."), renumbering later steps:

  1. Let preSpecifiedParams be the result of obtaining the pre-specified report parameters given options.

  2. If preSpecifiedParams is a DOMException, return a promise rejected with preSpecifiedParams.

  3. Let aggregationCoordinator be the result of obtaining the aggregation coordinator given options.

  4. If aggregationCoordinator is a DOMException, return a promise rejected with aggregationCoordinator.

Second, add the following steps in the nested scope just after "Let operation be operationMap[name]." (renumbering later steps as appropriate):
  1. Let batchingScope be a new batching scope.

  2. Let debugScope be a new debug scope.

  3. Let privateAggregationTimeout be null.

  4. Let hasRunPrivateAggregationCompletionTask be false.

  5. Let privateAggregationCompletionTask be an algorithm to perform the following steps:

    1. If hasRunPrivateAggregationCompletionTask, return.

    2. Set hasRunPrivateAggregationCompletionTask to true.

    3. Mark a debug scope complete given debugScope.

    4. Process contributions for a batching scope given batchingScope, outsideSettings's origin, "shared-storage" and privateAggregationTimeout.

  6. If aggregationCoordinator is not null, set the aggregation coordinator for a batching scope given aggregationCoordinator and batchingScope.

  7. If preSpecifiedParams is not null:

    1. Let isDeterministicReport be the result of determining if a report should be sent deterministically given preSpecifiedParams.

    2. If isDeterministicReport:

      1. Set privateAggregationTimeout to the current wall time plus the deterministic operation timeout duration.

    3. Set the pre-specified report parameters for a batching scope given preSpecifiedParams and batchingScope.

    4. If isDeterministicReport, run the following steps in parallel:

      1. Wait until privateAggregationTimeout.

      2. Run privateAggregationCompletionTask.

Finally, at the end of the same nested scope, add the following steps:
  1. Run privateAggregationCompletionTask.

Once shared-storage/88 is resolved, align the above monkey patches with how keepAlive is handled at operation completion.

The addModule() steps are modified to add a new step just before the final step ("Return promise."), renumbering the last step as appropriate:

  1. If this is a SharedStorageWorklet, upon fulfillment of promise or upon rejection of promise, run the following steps:

    1. Let globalScopes be this’s global scopes.

    2. Assert: globalScopes's size equals 1.

    3. Let privateAggregationObj be globalScopes[0]'s privateAggregation.

    4. Set privateAggregationObj’s allowed to use to the result of determining whether this's relevant global object's associated document is allowed to use the "private-aggregation" policy-controlled feature.

      Consider adding an early return here (and equivalently for Protected Audience) if the permissions policy check is made first.

    5. Set privateAggregationObj’s scoping details to a new scoping details with the items:

      get batching scope steps

      An algorithm that returns the batching scope that is scheduled to be passed to process contributions for a batching scope when the call currently executing in scope returns.

      get debug scope steps

      An algorithm that returns the debug scope that is scheduled to be passed to mark a debug scope complete when the call currently executing in scope returns.

      Note: Multiple operation invocations can be in-progress at the same time, each with a different batching scope and debug scope. However, only one can be currently executing.

Once shared-storage/89 is resolved, align the above monkey patch with how access to sharedStorage is prevented in SharedStorageWorkletGlobalScopes until addModule()'s initial execution is complete.

Note: This extends Shared Storage’s existing addModule() monkey patch.

11.1. Implementation-defined values

Deterministic operation timeout duration is a non-negative duration that controls how long a Shared Storage operation may make Private Aggregation contributions if it is triggering a deterministic report and, equivalently, when that report should be sent after the operation begins.

12. Protected Audience API monkey patches

This should be moved to the Protected Audience spec, along with any other Protected Audience-specific details. [Issue #43]

12.1. New WebIDL

partial interface InterestGroupScriptRunnerGlobalScope {
  readonly attribute PrivateAggregation privateAggregation;
};

dictionary PASignalValue {
  required DOMString baseValue;
  double scale;
  (bigint or long) offset;
};

dictionary PAExtendedHistogramContribution {
  required (PASignalValue or bigint) bucket;
  required (PASignalValue or long) value;
  bigint filteringId = 0;
};

[Exposed=InterestGroupScriptRunnerGlobalScope, SecureContext]
partial interface PrivateAggregation {
  undefined contributeToHistogramOnEvent(
      DOMString event, PAExtendedHistogramContribution contribution);
};

dictionary AuctionReportBuyersConfig {
  required bigint bucket;
  required double scale;
};

dictionary AuctionReportBuyerDebugModeConfig {
  boolean enabled = false;

  // Must only be provided if `enabled` is true.
  bigint? debugKey;
};

partial dictionary AuctionAdConfig {
  sequence<bigint> auctionReportBuyerKeys;
  record<DOMString, AuctionReportBuyersConfig> auctionReportBuyers;
  AuctionReportBuyerDebugModeConfig auctionReportBuyerDebugModeConfig;
};

Note: requiredSellerCapabilities is defined in the Protected Audience spec.

Do we want to align naming with implementation?
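
A non-normative sketch of how a seller might configure these fields when calling runAdAuction() (origins and keys are illustrative; other required auction config fields are elided):

const config = {
  seller: 'https://seller.example',
  // ...other required Protected Audience auction config fields...
  interestGroupBuyers: ['https://buyer.example'],
  auctionReportBuyerKeys: [100n],  // per-buyer bucket offsets, in interestGroupBuyers order
  auctionReportBuyers: {
    interestGroupCount: { bucket: 0n, scale: 1.0 },
    bidCount: { bucket: 1n, scale: 1.0 },
  },
  auctionReportBuyerDebugModeConfig: { enabled: true, debugKey: 1234n },
};
const result = await navigator.runAdAuction(config);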

The privateAggregation getter steps are to get the privateAggregation given this.

The contributeToHistogramOnEvent(DOMString event, PAExtendedHistogramContribution contribution) method steps are:
  1. Let scopingDetails be this's scoping details.

  2. If event starts with "reserved." and « "reserved.always", "reserved.loss", "reserved.win" » does not contain event, return.

    Note: No error is thrown to allow forward compatibility if additional reserved event types are added later.

  3. Let bucket be contribution["bucket"].

  4. If bucket is a PASignalValue:

    1. If bucket["baseValue"] is not a valid signal base value, throw a TypeError.

    2. If bucket["offset"] is not a bigint, throw a TypeError.

  5. Otherwise, if contribution["bucket"] is not in the range [0, 2^128 − 1], throw a TypeError.

    Make the error type consistent with contributeToHistogram(contribution).

  6. Let value be contribution["value"].

  7. If value is a PASignalValue:

    1. If value["baseValue"] is not a valid signal base value, throw a TypeError.

    2. If value["offset"] is a bigint, throw a TypeError.

  8. Otherwise, if contribution["value"] is negative, throw a TypeError.

  9. If contribution["filteringId"] is not contained in the range 0 to 256^(default filtering ID max bytes), exclusive, throw a TypeError.

    Make the error types on validation issues here and above consistent with contributeToHistogram(contribution).

    Note: It is not currently possible to set a non-default filtering ID max bytes for Protected Audience.

  10. Let batchingScope be null.

  11. If event starts with "reserved.", set batchingScope to the result of running scopingDetails's get batching scope steps.

    Note: Each non-reserved event will have a different batching scope that is created later.

  12. Let entry be a new on event contribution cache entry with the items:

    contribution

    contribution

    batching scope

    batchingScope

    debug scope

    The result of running scopingDetails's get debug scope steps.

    worklet function

    The worklet function that is currently being executed.

  13. Let global be this's relevant global object.

  14. Let auctionConfig be global’s auction config.

  15. Let ig be the result of maybe obtaining an interest group given global.

  16. Let cacheMap be auctionConfig’s per-bid or seller on event contribution cache.

  17. If cacheMap[ig] does not exist, set cacheMap[ig] to a new on event contribution cache.

  18. Let onEventContributionCache be cacheMap[ig].

  19. If onEventContributionCache[event] does not exist, set onEventContributionCache[event] to a new list.

  20. Append entry to onEventContributionCache[event].

Ensure errors are of an appropriate type, e.g. InvalidAccessError is deprecated.

Consider accepting an array of contributions. [Issue #44]
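
A non-normative sketch of recording a contribution conditional on an auction event, e.g. from generateBid() (the bucket parameters are illustrative):

// Contribute only if this bid wins; the bucket is computed from the winning bid.
privateAggregation.contributeToHistogramOnEvent('reserved.win', {
  bucket: { baseValue: 'winning-bid', scale: 1.0, offset: 200n },
  value: 1,
});

// Unknown "reserved." events are silently ignored for forward compatibility.
privateAggregation.contributeToHistogramOnEvent('reserved.new-event', { bucket: 1n, value: 1 });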

12.2. WebIDL modifications

The AuctionAdConfig and AuctionAdInterestGroup dictionaries are modified to add a new field:

dictionary ProtectedAudiencePrivateAggregationConfig {
  USVString aggregationCoordinatorOrigin;
};

partial dictionary AuctionAdConfig {
  ProtectedAudiencePrivateAggregationConfig privateAggregationConfig;
};

partial dictionary AuctionAdInterestGroup {
  ProtectedAudiencePrivateAggregationConfig privateAggregationConfig;
};

Note: sellerCapabilities is defined in the Protected Audience spec.

12.3. Structures

12.3.1. Extending auction config

Extend the auction config struct to add new fields:

per-bid or seller on event contribution cache

A map from interest group or null to an on event contribution cache.

Note: a null key represents the seller.

batching scope map

A map from a tuple consisting of an origin (an origin) and a coordinator (an aggregation coordinator) to a batching scope.

Note: Does not include batching scopes for contributions conditional on non-reserved events.

permissions policy state

A permissions policy state.

seller Private Aggregation coordinator

An aggregation coordinator. Defaults to the default aggregation coordinator.

auction report buyer keys

A map from buyer origins to bigints.

auction report buyers

A map from strings to AuctionReportBuyersConfigs.

auction report buyer debug details

A debug details.

Consider replacing the strings above with specific enum types.

12.3.2. Extending interest group

Extend the interest group struct to add a new field:

Private Aggregation coordinator

An aggregation coordinator or null.

Note: a null value specifies the default coordinator.


Add the following definitions in a new subsection at the end of Structures, renumbered appropriately.

12.3.3. Permissions policy state

A permissions policy state is a struct with the following items:
private aggregation enabled

A boolean (default false)

12.3.4. Signal base value

A signal base value is one of the following:
"winning-bid"

The numeric value is the bid value of the winning bid.

"highest-scoring-other-bid"

The numeric value is the bid value of the highest scoring bid that did not win.

"script-run-time"

The numeric value is the number of milliseconds of CPU time the calling function (e.g. generateBid()) took to run.

"signals-fetch-time"

The numeric value is the number of milliseconds it took for the trusted bidding or scoring signals fetch to complete, when called from generateBid() or scoreAd(), respectively.

Can this value be used in reportWin() or reportResult()?

"bid-reject-reason"

The numeric value is an integer representing the reason a bid was rejected.

Note: this mapping to an integer is defined in determine a signal’s numeric value.

12.3.5. Worklet function

A worklet function is one of the following:
"generate-bid"

The generateBid() function.

"score-ad"

The scoreAd() function.

"report-result"

The reportResult() function.

"report-win"

The reportWin() function.

12.3.6. On event contribution cache entry

An on event contribution cache entry is a struct with the following items:
contribution

A PAExtendedHistogramContribution

batching scope

A batching scope or null

debug scope

A debug scope

debug details

A debug details or null (default null)

worklet function

A worklet function

12.3.7. On event contribution cache

An on event contribution cache is a map from string to a list of on event contribution cache entries.

12.3.8. Extending InterestGroupScriptRunnerGlobalScope

Extend the global scopes subsection to add:

Each InterestGroupScriptRunnerGlobalScope has an:

auction config

An auction config

12.3.9. Extending InterestGroupReportingScriptRunnerGlobalScope

Extend the InterestGroupReportingScriptRunnerGlobalScope subsection to add an extra field to the end of the list beginning "Each InterestGroupReportingScriptRunnerGlobalScope has a":

interest group

Null or an interest group. Null for seller reporting (i.e. reportResult()).

12.4. Algorithm modifications

The joinAdInterestGroup() method steps are modified to add the following steps at the end of the scope nested under step 5 ("Validate the given group and ..."):

  1. If group["privateAggregationConfig"] exists:

    1. Let aggregationCoordinator be the result of obtaining the Private Aggregation coordinator given group["privateAggregationConfig"].

    2. If aggregationCoordinator is a DOMException, then throw aggregationCoordinator.

    3. Set interestGroup’s Private Aggregation coordinator to aggregationCoordinator.

The runAdAuction() method steps are modified to add the following step just after step 5 ("If auctionConfig is a failure, then..."), renumbering the later steps as appropriate:

  1. Set auctionConfig’s permissions policy state to a new permissions policy state with the items:

    private aggregation enabled

    The result of determining whether global’s associated Document is allowed to use the "private-aggregation" policy-controlled feature.

The validate and convert auction ad config steps are modified to add the following steps just before the last step ("Return auctionConfig"), renumbering the later step as appropriate:

  1. If config["auctionReportBuyerKeys"] exists:

    1. Let interestGroupBuyers be auctionConfig’s interest group buyers.

    2. If interestGroupBuyers is null, set interestGroupBuyers to a new list.

    3. For each index of the range 0 to config["auctionReportBuyerKeys"]'s size, exclusive:

      1. Let key be config["auctionReportBuyerKeys"][index].

      2. If key is not in the range [0, 2^128 − 1], throw a TypeError.

      3. If index is equal to or greater than interestGroupBuyers's size, continue.

        Note: Continue is used (instead of break) to match validation logic for all given buyer keys.

      4. Let origin be interestGroupBuyers[index].

      5. Set auctionConfig’s auction report buyer keys[origin] to key.

        Check behavior when an origin is repeated in interestGroupBuyers.

  2. If config["auctionReportBuyers"] exists:

    1. For each reportType → reportBuyerConfig of config["auctionReportBuyers"]:

      1. If « "interestGroupCount", "bidCount", "totalGenerateBidLatency", "totalSignalsFetchLatency" » does not contain reportType, continue.

        Note: No error is thrown to allow forward compatibility if additional report types are added later.

        Should these strings be dash delimited?

      2. If reportBuyerConfig["bucket"] is not in the range [0, 2^128 − 1], throw a TypeError.

        Consider validating the case where the bucket used (after summing) is too large. Currently, the implementation appears to overflow. See protected-audience/1040.

      3. Set auctionConfig’s auction report buyers[reportType] to reportBuyerConfig.

  3. Set auctionConfig’s auction report buyer debug details to a new debug details.

  4. If config["auctionReportBuyerDebugModeConfig"] exists:

    1. Let debugModeConfig be config["auctionReportBuyerDebugModeConfig"].

    2. Let enabled be debugModeConfig["enabled"].

    3. Let debugKey be debugModeConfig["debugKey"].

    4. If debugKey is not null:

      1. If debugKey is not in the range [0, 2^64 − 1], throw a TypeError.

      2. If enabled is false, throw a TypeError.

    5. Set auctionConfig’s auction report buyer debug details to a new debug details with the items:

      enabled

      enabled

      key

      debugKey

  5. If config["privateAggregationConfig"] exists:

    1. Let aggregationCoordinator be the result of obtaining the Private Aggregation coordinator given config["privateAggregationConfig"].

    2. If aggregationCoordinator is a DOMException, return failure.

    3. Set auctionConfig’s seller Private Aggregation coordinator to aggregationCoordinator.

Make all map indexing links (throughout the spec) where possible, i.e. matching this section.

The generate and score bids algorithm is modified by inserting the following step before each of the two "Return leadingBidInfo’s leading bid" steps (one in a nested scope), renumbering this and later steps as necessary.

  1. Process the Private Aggregation contributions for an auction given auctionConfig and leadingBidInfo.

The evaluate a script steps are modified in two ways. First, we add the following steps after step 11 ("If evaluationStatus is an abrupt completion..."), renumbering later steps as appropriate:

  1. Set global’s privateAggregation's allowed to use to auctionConfig’s permissions policy state's private aggregation enabled.

  2. Let debugScope be a new debug scope.

  3. Set global’s privateAggregation's scoping details to a new scoping details with the items:

    get batching scope steps

    An algorithm that performs the following steps:

    1. Let origin be realm’s settings object's origin.

    2. Let ig be the result of maybe obtaining an interest group given realm’s global object.

    3. Let aggregationCoordinator be null.

    4. If ig is not null, set aggregationCoordinator to ig’s Private Aggregation coordinator.

    5. Otherwise, set aggregationCoordinator to auctionConfig’s seller Private Aggregation coordinator.

    6. If aggregationCoordinator is null, set aggregationCoordinator to the default aggregation coordinator.

    7. Return the result of running get or create a batching scope given origin, aggregationCoordinator and auctionConfig.

    get debug scope steps

    An algorithm that returns debugScope.
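
For example, the coordinator selection in the get batching scope steps above reduces to a simple fallback chain. A non-normative JavaScript sketch (the property names are hypothetical stand-ins for the spec-internal fields):

function chooseAggregationCoordinator(ig, auctionConfig, defaultCoordinator) {
  // Prefer the interest group's coordinator, then the seller's,
  // then the default aggregation coordinator.
  return (ig && ig.privateAggregationCoordinator) ??
      auctionConfig.sellerPrivateAggregationCoordinator ??
      defaultCoordinator;
}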

Once protected-audience/615 is resolved, align the above monkey patch with how access to other functions is prevented in InterestGroupScriptRunnerGlobalScopes until the script’s initial execution is complete.

Second, in the nested scope of the last step, we insert the following steps just after the step labelled "Clean up after script", renumbering the later step as appropriate:

  1. Let debugDetails be the result of getting a debug details given debugScope.

  2. Let ig be the result of maybe obtaining an interest group given global.

  3. Let onEventContributionCache be auctionConfig’s per-bid or seller on event contribution cache[ig].

  4. For each event → entries of onEventContributionCache:

    1. For each onEventEntry of entries:

      1. If onEventEntry’s debug scope is debugScope, set onEventEntry’s debug details to debugDetails.

  5. Mark a debug scope complete given debugScope.

The evaluate a bidding script steps are modified in the following two ways. First, we add a new parameter, an auction config auctionConfig.

Note: This algorithm already takes an interest group parameter ig.

Second, we add the following step after step 6 ("Set global’s interest group to ig"), renumbering later steps as appropriate:

  1. Set global’s auction config to auctionConfig.

The evaluate a scoring script steps are modified in the following two ways. First, we add a new parameter, an auction config auctionConfig.

Second, we add the following step after step 1 ("Let global be a new InterestGroupScoringScriptRunnerGlobalScope."), renumbering the later step as appropriate:

  1. Set global’s auction config to auctionConfig.

The evaluate a reporting script steps are modified in the following two ways. First, we add two new parameters: an auction config auctionConfig and an interest group or null ig.

Second, we add the following step after step 1 ("Let global be a new InterestGroupReportingScriptRunnerGlobalScope."), renumbering the later step as appropriate:

  1. Set global’s auction config to auctionConfig.

  2. Set global’s interest group to ig.

Then, we modify the invocations of the above algorithms to plumb the new parameters in:

The generate a bid algorithm is modified to add a new auction config parameter auctionConfig. Additionally, its last step is modified by adding the argument auctionConfig to the invocation of evaluating a bidding script. Further, the generate and score bids algorithm is modified by adding the argument auctionConfig to both invocations of generate a bid.

The score and rank a bid algorithm is modified by adding the argument auctionConfig to the invocation of evaluating a scoring script.

The report result algorithm is modified by passing in the arguments auctionConfig and null to the invocation of evaluate a reporting script.

The report win algorithm is modified by passing in the arguments auctionConfig and winner’s interest group to the invocation of evaluate a reporting script.

The estimated size of an interest group algorithm is modified to add the following line at the end of the sum:

  1. The length of the serialization of ig’s Private Aggregation coordinator if the field is not null.

The update interest groups steps are modified to add the following case at the end of the "Switch on key" step.

"privateAggregationConfig"
  1. If value is not a map whose keys are strings, jump to the step labeled Abort update.

  2. If value["aggregationCoordinatorOrigin"] exists:

    1. If value["aggregationCoordinatorOrigin"] is not a string, jump to the step labeled Abort update.

    2. Let aggregationCoordinator be the result of obtaining the Private Aggregation coordinator given value["aggregationCoordinatorOrigin"].

    3. If aggregationCoordinator is a DOMException, jump to the step labeled Abort update.

    4. Otherwise, set ig’s Private Aggregation coordinator to aggregationCoordinator.
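
For example, a buyer could specify a non-default coordinator when joining an interest group; the same "privateAggregationConfig" key is then recognized when the group is later updated. A non-normative sketch (origins are illustrative; the join signature is defined by Protected Audience):

navigator.joinAdInterestGroup({
  owner: "https://buyer.example",
  name: "athletic-shoes",
  lifetimeMs: 30 * 24 * 60 * 60 * 1000,  // 30 days
  privateAggregationConfig: {
    aggregationCoordinatorOrigin: "https://coordinator.example",
  },
});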

12.5. New algorithms

Add the following definitions:

To process the Private Aggregation contributions for an auction given an auction config auctionConfig and a leading bid info leadingBidInfo:

  1. Let winnerOrigin be null.

  2. If leadingBidInfo’s leading bid is not null, set winnerOrigin to leadingBidInfo’s leading bid’s interest group’s owner.

  3. For each ig → onEventContributionCache of auctionConfig’s per-bid or seller on event contribution cache:

    1. Let origin be null.

    2. If ig is null, set origin to auctionConfig’s seller.

    3. Otherwise, set origin to ig’s owner.

    4. For each event → entries of onEventContributionCache:

      1. If event is "reserved.win" or does not start with "reserved.":

        1. If origin is not winnerOrigin, continue.

      2. If event is "reserved.loss" and origin is winnerOrigin, continue.

      3. For each onEventEntry of entries:

        1. Let filledInContribution be the result of filling in the contribution given onEventEntry’s contribution and leadingBidInfo.

          Once protected-audience/627 is resolved, align 'filling in' logic with forDebuggingOnly.

        2. If event does not start with "reserved.":

          1. Store event, filledInContribution, onEventEntry’s debug details in the FencedFrameConfig as appropriate.

            Note: Each non-reserved event will have a different batching scope.

            Once protected-audience/616 and any successors are landed, align integration and fill in fenced frame’s report a private aggregation event.

          2. Continue.

        3. Let entry be a new contribution cache entry with the items:

          contribution

          filledInContribution

          batching scope

          onEventEntry’s batching scope

          debug scope

          onEventEntry’s debug scope

          debug details

          onEventEntry’s debug details

        4. Append entry to the contribution cache.

  4. Let sellerBatchingScope be the result of getting or creating a batching scope given auctionConfig’s seller, auctionConfig’s seller Private Aggregation coordinator, and auctionConfig.

  5. Let auctionReportBuyersDebugScope be a new debug scope.

  6. For each reportType → reportBuyerConfig of auctionConfig’s auction report buyers:

    1. For each buyerOrigin → buyerOffset of auctionConfig’s auction report buyer keys:

      1. Let bucket be the sum of buyerOffset and reportBuyerConfig’s bucket.

        Handle overflow here or in validation. See protected-audience/1040.

      2. Let value be the result (a double) of switching on reportType:

        "interestGroupCount"

        The number of interest groups in the user agent's interest group set whose owner is buyerOrigin.

        "bidCount"

        The number of valid bids generated by interest groups whose owner is buyerOrigin.

        "totalGenerateBidLatency"

        The sum of execution time in milliseconds for all generateBid() calls in the auction for interest groups whose owner is buyerOrigin.

        "totalSignalsFetchLatency"

        The sum of the time in milliseconds spent fetching trusted bidding signals in the auction for interest groups whose owner is buyerOrigin, or 0 if no such fetches were made.

        None of the above values

        Assert: false

        Note: This enum value is validated in validate and convert auction ad config.

        More formally spec the values here.

      3. Set value to the result of multiplying reportBuyerConfig’s scale with value.

      4. Set value to the maximum of 0.0 and value.

      5. Set value to the result of converting value to an integer by truncating its fractional part.

      6. Set value to the minimum of value and 2^31 − 1.

      7. Let contribution be a new PAHistogramContribution with the items:

        bucket

        bucket

        value

        value

        filteringId

        0

        Consider allowing the filtering ID to be set here.

      8. For each ig of the user agent's interest group set whose owner is buyerOrigin:

        1. If seller capabilities don’t allow this reporting, continue.

          Align behavior with seller capabilities handling once protected-audience/966 is resolved.

        2. Let entry be a new contribution cache entry with the items:

          contribution

          contribution

          batching scope

          sellerBatchingScope

          debug scope

          auctionReportBuyersDebugScope

        3. Append entry to the contribution cache.

  7. Mark a debug scope complete given auctionReportBuyersDebugScope and auctionConfig’s auction report buyer debug details.

  8. For each (origin, aggregationCoordinator) → batchingScope of auctionConfig’s batching scope map:

    1. Process contributions for a batching scope given batchingScope, origin, "protected-audience" and null.
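
For example, the per-buyer value computation above scales the raw metric, clamps it to be non-negative, truncates the fractional part and caps the result. A non-normative JavaScript sketch:

function reportBuyerValue(rawMetric, scale) {
  let value = rawMetric * scale;
  value = Math.max(0, value);
  value = Math.trunc(value);
  return Math.min(value, 2 ** 31 - 1);
}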

Verify interaction with component auctions.

Use [=map/For each=] where possible.

To get or create a batching scope given an origin origin, an aggregation coordinator aggregationCoordinator and an auction config auctionConfig, perform the following steps. They return a batching scope.

  1. Let batchingScopeMap be auctionConfig’s batching scope map.

  2. Let tuple be (origin, aggregationCoordinator).

  3. If batchingScopeMap[tuple] does not exist:

    1. Set batchingScopeMap[tuple] to a new batching scope.

    2. If aggregationCoordinator is not null, set the aggregation coordinator for a batching scope given aggregationCoordinator and batchingScopeMap[tuple].

  4. Return batchingScopeMap[tuple].
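
For example, a non-normative JavaScript sketch of this algorithm, using a string stand-in for the (origin, aggregationCoordinator) tuple key and a Symbol as the unique batching scope value:

function getOrCreateBatchingScope(batchingScopeMap, origin, coordinator) {
  const key = `${origin} ${coordinator}`;
  if (!batchingScopeMap.has(key)) {
    batchingScopeMap.set(key, Symbol("batching scope"));
    // The spec additionally records the aggregation coordinator for the
    // new scope here when the coordinator is not null.
  }
  return batchingScopeMap.get(key);
}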

To fill in the contribution given a PAExtendedHistogramContribution contribution and a leading bid info leadingBidInfo, perform the following steps. They return a PAHistogramContribution.

  1. Let bucket be contribution["bucket"].

  2. If bucket is a PASignalValue, set bucket to the result of filling in the signal value given bucket, 2^128 − 1 and leadingBidInfo.

  3. Let value be contribution["value"].

  4. If value is a PASignalValue, set value to the result of filling in the signal value given value, 2^31 − 1 and leadingBidInfo.

  5. Let filledInContribution be a new PAHistogramContribution with the items:

    bucket

    bucket

    value

    value

    filteringId

    contribution["filteringId"]

  6. Return filledInContribution.
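
For example, a bidder could request a contribution whose bucket is only known once the auction completes; the PASignalValue is filled in by the steps above. A non-normative sketch, as called from generateBid():

privateAggregation.contributeToHistogramOnEvent("reserved.win", {
  // Filled in after the auction: 100 times the winning bid, truncated.
  bucket: { baseValue: "winning-bid", scale: 100 },
  value: 1,
});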

To fill in the signal value given a PASignalValue value, an integer maxAllowed and a leading bid info leadingBidInfo, perform the following steps. They return an integer.

  1. Assert: value["baseValue"] is a valid signal base value.

  2. Let returnValue be the result of determining a signal’s numeric value given value["baseValue"] and leadingBidInfo.

  3. If value["scale"] exists, set returnValue to the result of multiplying value["scale"] with returnValue.

  4. Set returnValue to the result of converting returnValue to an integer by truncating its fractional part.

  5. If value["offset"] exists, set returnValue to the result of adding returnValue to value["offset"].

  6. Clamp returnValue to the range [0, maxAllowed] and return the result.
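
A non-normative JavaScript sketch of these steps, assuming a signalNumericValue() helper standing in for determine a signal’s numeric value (defined next):

function fillInSignalValue(value, maxAllowed, leadingBidInfo) {
  let result = signalNumericValue(value.baseValue, leadingBidInfo);
  if (value.scale !== undefined) result *= value.scale;
  result = Math.trunc(result);
  if (value.offset !== undefined) result += Number(value.offset);
  // Clamp to the range [0, maxAllowed].
  return Math.min(Math.max(result, 0), maxAllowed);
}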

To determine a signal’s numeric value given a signal base value signalBaseValue and a leading bid info leadingBidInfo, perform the following steps. They return a double.

  1. If signalBaseValue is "winning-bid":

    1. If leadingBidInfo’s leading bid is null, return 0.

    2. Otherwise, return leadingBidInfo’s leading bid’s bid.

  2. If signalBaseValue is "highest-scoring-other-bid":

    1. If leadingBidInfo’s highest scoring other bid is null, return 0.

    2. Otherwise, return leadingBidInfo’s highest scoring other bid’s bid.

  3. If signalBaseValue is "script-run-time":

    1. Return the number of milliseconds of CPU time that the calling function (e.g. generateBid()) took to run.

  4. If signalBaseValue is "signals-fetch-time": Switch on the associated worklet function:

    generate-bid

    Return the number of milliseconds it took for the trusted bidding signals fetch to complete, or 0 if no fetch was made.

    score-ad

    Return the number of milliseconds it took for the trusted scoring signals fetch to complete or 0 if no fetch was made.

    report-result
    report-win

    Return 0.

    Consider disallowing this in the latter two worklet functions.

  5. If signalBaseValue is "bid-reject-reason":

    1. If the bid failed solely because it did not meet the required k-anonymity threshold, return 8.

    2. Let bidRejectReason be "not-available".

    3. If the seller provided a reject reason, set bidRejectReason to that value.

    4. If bidRejectReason is:

      "not-available"

      Return 0.

      "invalid-bid"

      Return 1.

      "bid-below-auction-floor"

      Return 2.

      "pending-approval-by-exchange"

      Return 3.

      "disapproved-by-exchange"

      Return 4.

      "blocked-by-publisher"

      Return 5.

      "language-exclusions"

      Return 6.

      "category-exclusions"

      Return 7.

      None of the above values

      Assert: false

      Note: This enum value is validated in scoreAd().

      Verify this once protected-audience/627 is resolved.

      Once protected-audience/594 lands, update this mapping to align.

      Verify handling when the bid was not rejected.

      Consider disallowing this from reportWin() and reportResult().
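
For example, the mapping above expressed as a non-normative lookup table (the k-anonymity case, which returns 8, is handled before the seller-provided reason is consulted):

const BID_REJECT_REASON_CODES = {
  "not-available": 0,
  "invalid-bid": 1,
  "bid-below-auction-floor": 2,
  "pending-approval-by-exchange": 3,
  "disapproved-by-exchange": 4,
  "blocked-by-publisher": 5,
  "language-exclusions": 6,
  "category-exclusions": 7,
};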

To maybe obtain an interest group given an InterestGroupScriptRunnerGlobalScope global, perform the following steps. They return an interest group or null:

  1. Switch on global’s type:

    InterestGroupBiddingScriptRunnerGlobalScope

    Return global’s interest group.

    InterestGroupScoringScriptRunnerGlobalScope

    Return null.

    InterestGroupReportingScriptRunnerGlobalScope

    Return global’s interest group.

To obtain the Private Aggregation coordinator given a ProtectedAudiencePrivateAggregationConfig config, perform the following steps. They return an aggregation coordinator, null or a DOMException.

  1. If config["aggregationCoordinatorOrigin"] does not exist, return null.

  2. Return the result of obtaining the Private Aggregation coordinator given config["aggregationCoordinatorOrigin"].

To obtain the Private Aggregation coordinator given a USVString originString, perform the following steps. They return an aggregation coordinator or a DOMException.

  1. Let url be the result of running the URL parser on originString.

  2. If url is failure or null, return a new DOMException with name "SyntaxError".

    Consider throwing an error if the path is not empty.

  3. Let origin be url’s origin.

  4. If the result of determining if an origin is an aggregation coordinator given origin is false, return a new DOMException with name "DataError".

  5. Return origin.
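
For example, a non-normative JavaScript sketch of this validation, assuming an isAggregationCoordinator() predicate standing in for determining if an origin is an aggregation coordinator. (The algorithm above returns DOMExceptions to its caller; a script author would observe them as thrown errors.)

function obtainAggregationCoordinator(originString) {
  let url;
  try {
    url = new URL(originString);
  } catch {
    throw new DOMException("Could not parse origin", "SyntaxError");
  }
  if (!isAggregationCoordinator(url.origin)) {
    throw new DOMException("Unrecognized coordinator", "DataError");
  }
  return url.origin;
}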

13. Privacy considerations

This section is non-normative.

13.1. Cross-site information disclosure

This API lets isolated contexts with access to cross-site data (i.e. Shared Storage worklets/Protected Audience script runners) send aggregatable reports over the network.

Aggregatable reports contain encrypted high entropy cross-site information, in the form of key-value pairs (i.e. contributions to a histogram). The information embedded in the contributions is arbitrary but can include things like browsing history and other cross-site activity. The API aims to protect this information from being passed from one site to another.

13.1.1. Restricted contribution processing

The histogram contributions are not exposed directly. Instead, they are encrypted so that they can only be processed by a trusted aggregation service. This trusted aggregation service sums the values across the reports for each key and adds noise to each of these values to produce ‘summary reports’.

The output of that processing will be an aggregated, noised histogram. The service ensures that no report can be processed multiple times. Further, information exposure is limited by contribution budgets on the user agent. In principle, this framework can support specifying a noise parameter which satisfies differential privacy.

13.1.2. Unencrypted metadata

These reports also expose a limited amount of metadata, which is not based on cross-site data. The recipient of the report may also be able to observe side-channel information, such as the time when the report was sent or the IP address of the sender.

13.1.3. Protecting against leaks via the number of reports

However, the number of reports with the given metadata could expose some cross-site information. To protect against this, the API delays sending reports by a randomized amount of time, making it difficult to determine whether any particular event triggered a report. In the case that a context ID is supplied or a non-default filtering ID max bytes is specified, the API makes the number of reports sent deterministic (sending 'null reports' if necessary, each containing only a contribution with a value of 0 in the payload). Additional mitigations may also be possible in the future, e.g. adding noise to the report count.

13.1.4. Protecting against leaks via payload size

The length of the payload could additionally expose some cross-site information, namely how many contributions are included. To protect against this, the payload is padded to a fixed number of contributions.

13.1.5. Temporary debugging mechanism

The enableDebugMode() method allows for many of the protections of this API to be bypassed to ease testing and integration. Specifically, the contents of the payload, i.e. the histogram contributions, are revealed in the clear when the debug mode is enabled. Optionally, a debug key can also be set to associate the report with the calling context. In the future, this mechanism will only be available for callers that are eligible to set third-party cookies. In that case, the API caller already has the ability to communicate information cross-site.
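
For example, a caller in an isolated context could enable debug mode, optionally with a debug key, before making contributions:

privateAggregation.enableDebugMode({ debugKey: 1234n });
privateAggregation.contributeToHistogram({ bucket: 1n, value: 128 });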

Tie enableDebugMode() to third-party cookie eligibility. [Issue #57]

13.1.6. Privacy parameters

The amount of information exposed by this API is a product of the privacy parameters used (e.g. contribution limits and the noise distribution used in the aggregation service). While we aim to minimize the amount of information exposed, we also aim to support a wide range of use cases. The privacy parameters are left implementation-defined to allow different and evolving choices in the tradeoffs between information exposure and utility.

13.2. Clearing site data

The aggregatable report cache as well as any contribution history data stored for the consume budget if permitted algorithm contain data about a user’s web activity. As such, user controls to delete this data are required, see clearing storage.

On the other hand, the contribution cache, the debug scope map and the pre-specified report parameters map only contain short-lived data tied to particular batching scopes and debug scopes, so controls are not required.

13.3. Reporting delay concerns

Delaying sending reports after API invocation can enable side-channel leakage in some situations.

13.3.1. Cross-network reporting origin leakage

A report may be stored while the browser is connected to one network but sent while the browser is connected to a different network, potentially enabling cross-network leakage of the reporting origin.

Example: A user runs the browser with a particular browsing profile on their home network. An aggregatable report with a particular reporting origin is stored with a report time in the future. After the report time is reached, the user runs the browser with the same browsing profile on their employer’s network, at which point the browser sends the report to the reporting origin. Although the report itself may be sent over HTTPS, the reporting origin may be visible to the network administrator via DNS or the TLS client hello (which can be mitigated with ECH). Some reporting origins may be known to operate only or primarily on sensitive sites, so this could leak information about the user’s browsing activity to the user’s employer without their knowledge or consent.

Possible mitigations include:

  1. Only sending reports with a given reporting origin when the browser has already made a request to that origin on the same network: This prevents the network administrator from gaining additional information from the Private Aggregation API. However, it increases report loss and report delays, which reduces the utility of the API for the reporting origin. It might also increase the effectiveness of timing attacks, as the origin may be able to better link the report with the user’s request that allowed the report to be released.

  2. Send reports immediately: This reduces the likelihood of a report being stored and sent on different networks. However, it increases the likelihood that the reporting origin can correlate the original API invocation to the report being sent, which weakens the privacy controls of the API, see Protecting against leaks via the number of reports.

  3. Use a trusted proxy server to send reports: This effectively moves the reporting origin into the report body, so only the proxy server would be visible to the network administrator.

  4. Require DNS over HTTPS: This effectively hides the reporting origin from the network administrator, but is likely impractical to enforce and is itself perhaps circumventable by the network administrator, e.g. by monitoring IP addresses instead.

13.3.2. User-presence tracking

The browser only tries to send reports while it is running and while it has internet connectivity (even without an explicit connectivity check, sending a report will naturally fail without it), so receiving or not receiving a (serialized) aggregatable report at the original report time leaks information about the user’s presence. Additionally, because the report request inherently includes an IP address, this could reveal the user’s IP-derived location to the reporting origin, including at-home vs. at-work or approximate real-world geolocation, or reveal patterns in the user’s browsing activity.

Possible mitigations include:

  1. Send reports immediately: This effectively eliminates the presence tracking, as the original request made to the reporting origin is in close temporal proximity to the report request. However, it increases the likelihood that the reporting origin can correlate the original API invocation to the report being sent, which weakens the privacy controls of the API, see Protecting against leaks via the number of reports.

  2. Send reports immediately to a trusted proxy server, which would itself apply additional delay: This would effectively hide both the user’s IP address and their online-offline presence from the reporting origin.

14. Security considerations

This section is non-normative.

14.1. Same-origin policy

Writes to the aggregatable report cache, contribution cache, debug scope map and pre-specified report parameters map are attributed to the reporting origin, and the data included in any report with a given reporting origin is generated using only data from that origin.

One notable exception is the consume budget if permitted algorithm which is implementation-defined and can consider contribution history from other origins. For example, the algorithm could consider all history from a particular site. This would be an explicit relaxation of the same-origin policy as multiple origins would be able to influence the API’s behavior. One particular risk of these kinds of shared limits is the introduction of denial of service attacks, where a group of origins could collude to intentionally consume all available budget, causing subsequent origins to be unable to access the API. This trades off security for privacy, in that the limits are there to reduce the efficacy of many origins colluding together to violate privacy. However, this security risk is lessened if the set of origins limited are all same site. User agents should consider these tradeoffs when choosing the consume budget if permitted algorithm.

14.2. Protecting the histogram contributions

As discussed above, the processing of histogram contributions is limited to protect privacy. This limitation relies on only the trusted aggregation service being able to access the unencrypted histogram contributions.

To ensure this, this API uses HPKE [RFC9180], a modern encryption specification. Additionally, each user agent is encouraged to require regular key rotation by the aggregation service. This limits the amount of data encrypted with the same key and thus the amount of vulnerable data if a key is compromised.

While not specified here, each user agent is strongly encouraged to consider the security of any aggregation service design before allowing its public keys to be returned by obtain the public key for encryption.

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[ATTRIBUTION-REPORTING-API]
Attribution Reporting. Draft Community Group Report. URL: https://wicg.github.io/attribution-reporting-api/
[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[ECMASCRIPT]
ECMAScript Language Specification. URL: https://tc39.es/ecma262/multipage/
[ENCODING]
Anne van Kesteren. Encoding Standard. Living Standard. URL: https://encoding.spec.whatwg.org/
[FENCED-FRAME]
Fenced Frame. Draft Community Group Report. URL: https://wicg.github.io/fenced-frame/
[FETCH]
Anne van Kesteren. Fetch Standard. Living Standard. URL: https://fetch.spec.whatwg.org/
[HR-TIME-3]
Yoav Weiss. High Resolution Time. URL: https://w3c.github.io/hr-time/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[PERMISSIONS-POLICY-1]
Ian Clelland. Permissions Policy. URL: https://w3c.github.io/webappsec-permissions-policy/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[RFC8949]
C. Bormann; P. Hoffman. Concise Binary Object Representation (CBOR). December 2020. Internet Standard. URL: https://www.rfc-editor.org/rfc/rfc8949
[RFC9180]
R. Barnes; et al. Hybrid Public Key Encryption. February 2022. Informational. URL: https://www.rfc-editor.org/rfc/rfc9180
[SECURE-CONTEXTS]
Mike West. Secure Contexts. URL: https://w3c.github.io/webappsec-secure-contexts/
[SHARED-STORAGE]
Shared Storage API. Draft Community Group Report. URL: https://wicg.github.io/shared-storage/
[TURTLEDOVE]
Protected Audience (formerly FLEDGE). Draft Community Group Report. URL: https://wicg.github.io/turtledove/
[URL]
Anne van Kesteren. URL Standard. Living Standard. URL: https://url.spec.whatwg.org/
[W3C-PROCESS]
Elika J. Etemad (fantasai); Florian Rivoal. W3C Process Document. 2 November 2021. URL: https://www.w3.org/Consortium/Process/
[WebCryptoAPI]
Mark Watson. Web Cryptography API. URL: https://w3c.github.io/webcrypto/
[WebDriver]
Simon Stewart; David Burns. WebDriver. URL: https://w3c.github.io/webdriver/
[WEBDRIVER2]
Simon Stewart; David Burns. WebDriver. URL: https://w3c.github.io/webdriver/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/

Informative References

[RFC8484]
P. Hoffman; P. McManus. DNS Queries over HTTPS (DoH). October 2018. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc8484
[RFC8615]
M. Nottingham. Well-Known Uniform Resource Identifiers (URIs). May 2019. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc8615

IDL Index

[Exposed=(InterestGroupScriptRunnerGlobalScope,SharedStorageWorklet),
 SecureContext]
interface PrivateAggregation {
  undefined contributeToHistogram(PAHistogramContribution contribution);
  undefined enableDebugMode(optional PADebugModeOptions options = {});
};

dictionary PAHistogramContribution {
  required bigint bucket;
  required long value;
  bigint filteringId = 0;
};

dictionary PADebugModeOptions {
  required bigint debugKey;
};

partial interface SharedStorageWorkletGlobalScope {
  readonly attribute PrivateAggregation privateAggregation;
};

dictionary SharedStoragePrivateAggregationConfig {
  USVString aggregationCoordinatorOrigin;
  USVString contextId;
  [EnforceRange] unsigned long long filteringIdMaxBytes;
};

partial dictionary SharedStorageRunOperationMethodOptions {
  SharedStoragePrivateAggregationConfig privateAggregationConfig;
};

partial interface InterestGroupScriptRunnerGlobalScope {
  readonly attribute PrivateAggregation privateAggregation;
};

dictionary PASignalValue {
  required DOMString baseValue;
  double scale;
  (bigint or long) offset;
};

dictionary PAExtendedHistogramContribution {
  required (PASignalValue or bigint) bucket;
  required (PASignalValue or long) value;
  bigint filteringId = 0;
};

[Exposed=InterestGroupScriptRunnerGlobalScope, SecureContext]
partial interface PrivateAggregation {
  undefined contributeToHistogramOnEvent(
      DOMString event, PAExtendedHistogramContribution contribution);
};

dictionary AuctionReportBuyersConfig {
  required bigint bucket;
  required double scale;
};

dictionary AuctionReportBuyerDebugModeConfig {
  boolean enabled = false;

  // Must only be provided if `enabled` is true.
  bigint? debugKey;
};

partial dictionary AuctionAdConfig {
  sequence<bigint> auctionReportBuyerKeys;
  record<DOMString, AuctionReportBuyersConfig> auctionReportBuyers;
  AuctionReportBuyerDebugModeConfig auctionReportBuyerDebugModeConfig;
};

dictionary ProtectedAudiencePrivateAggregationConfig {
  USVString aggregationCoordinatorOrigin;
};

partial dictionary AuctionAdConfig {
  ProtectedAudiencePrivateAggregationConfig privateAggregationConfig;
};

partial dictionary AuctionAdInterestGroup {
  ProtectedAudiencePrivateAggregationConfig privateAggregationConfig;
};

Issues Index

Per the Web Platform Design Principles, we should consider switching long to [EnforceRange] long long.
enableDebugMode(options)'s argument should not have a default value of {}. Alternatively, debugKey should not be required in PADebugModeOptions.
Ensure errors are of an appropriate type, e.g. InvalidAccessError is deprecated.
Consider accepting an array of contributions. [Issue #44]
Ensure errors are of an appropriate type, e.g. InvalidAccessError is deprecated.
Unique internal value is not an exported definition. See infra/583.
Consider switching to the suitable origin concept used by the Attribution Reporting API here and elsewhere.
Move other structures to be defined inline instead of via a header. Consider also removing all the subheadings.
Elsewhere, link to definition when using user agent.
Consider adding more constants.
This would ideally be replaced by a more descriptive algorithm in Infra. See infra/201.
Consider improving developer ergonomics here (e.g. a way to detect this case).
Ensure errors are of an appropriate type, e.g. InvalidAccessError is deprecated.
Elsewhere, surround algorithms in a <div algorithm> block to match, and add styling for all algorithms per bikeshed/1472.
Do we have to use the queue a task algorithm here?
Do we need to queue this task?
Register this well-known directory. [Issue #67]
Specify this in terms of fetch. Add details about which encryption standards to use, length requirements, etc.
This should be moved to the Shared Storage spec. [Issue #43]
Go through all monkey patches and ensure every definition (including structures) that is needed is exported.
Consider throwing an error if the path is not empty.
Once shared-storage/88 is resolved, align the above monkey patches with how keepAlive is handled at operation completion.
Consider adding an early return here (and equivalently for Protected Audience) if the permissions policy check is made first.
Once shared-storage/89 is resolved, align the above monkey patch with how access to sharedStorage is prevented in SharedStorageWorkletGlobalScopes until addModule()'s initial execution is complete.
This should be moved to the Protected Audience spec, along with any other Protected Audience-specific details. [Issue #43]
Do we want to align naming with implementation?
Make the error type consistent with contributeToHistogram(contribution).
Make the error types on validation issues here and above consistent with contributeToHistogram(contribution).
Ensure errors are of an appropriate type, e.g. InvalidAccessError is deprecated.
Consider accepting an array of contributions. [Issue #44]
Consider replacing the strings above with specific enum types.
Can this value be used in reportWin() or reportResult()?
Check behavior when an origin is repeated in interestGroupBuyers.
Should these strings be dash delimited?
Consider validating the case where the bucket used (after summing) is too large. Currently, the implementation appears to overflow. See protected-audience/1040.
Make all map indexing links (throughout the spec) where possible, i.e. matching this section.
Once protected-audience/615 is resolved, align the above monkey patch with how access to other functions is prevented in InterestGroupScriptRunnerGlobalScopes until the script’s initial execution is complete.
Once protected-audience/627 is resolved, align 'filling in' logic with forDebuggingOnly.
Once protected-audience/616 and any successors are landed, align integration and fill in fenced frame’s report a private aggregation event.
Handle overflow here or in validation. See protected-audience/1040.
More formally spec the values here.
Consider allowing the filtering ID to be set here.
Align behavior with seller capabilities handling once protected-audience/966 is resolved.
Verify interaction with component auctions.
Use [=map/For each=] where possible.
Consider disallowing this in the latter two worklet functions.
Verify this once protected-audience/627 is resolved.
Once protected-audience/594 lands, update this mapping to align.
Verify handling when the bid was not rejected.
Consider disallowing this from reportWin() and reportResult().
Consider throwing an error if the path is not empty.
Tie enableDebugMode() to third-party cookie eligibility. [Issue #57]