--- url: /TestFlight/guide/AppIntent.md ---

# AppIntent

`AppIntentManager` is used to **register and manage AppIntents** in the **Scripting** app. It serves as the core mechanism for executing script logic behind controls in **Widgets**, **Live Activities**, and **ControlWidgets**.

All `AppIntent`s **must** be defined in the `app_intents.tsx` file. When an intent is executed, the script runs in the `"app_intents"` environment (`Script.env === "app_intents"`).

Once registered, these intents can be triggered by **Button** and **Toggle** controls within Widgets, Live Activities, or ControlWidgets, allowing users to define interactive behavior via script.

***

## 1. Type Definitions

### `AppIntent`

Represents a concrete intent instance with parameters and metadata.

| Property | Type | Description |
| --- | --- | --- |
| `script` | `string` | The internal script path. Automatically generated by the system. |
| `name` | `string` | The name of the AppIntent. Must be unique. |
| `protocol` | `AppIntentProtocol` | The protocol the intent conforms to (e.g., general, audio, Live Activity). |
| `params` | `T` | The parameters to be passed when the intent is executed. |

***

### `AppIntentFactory<T>`

A **factory function** that creates an `AppIntent` instance with specified parameters.

```ts
type AppIntentFactory<T> = (params: T) => AppIntent<T>
```

***

### `AppIntentPerform<T>`

A function that handles intent execution logic asynchronously.

```ts
type AppIntentPerform<T> = (params: T) => Promise<void>
```

***

### `AppIntentProtocol`

An enumeration that defines the behavior type of the intent.

| Enum Value | Description |
| --- | --- |
| `AppIntent` (0) | A general-purpose AppIntent for typical operations. |
| `AudioPlaybackIntent` (1) | An intent that plays, pauses, or otherwise modifies audio playback. |
| `AudioRecordingIntent` (2) | _(iOS 18.0+)_ An intent that starts, stops, or modifies audio recording. **Note**: On iOS/iPadOS, when using the `AudioRecordingIntent` protocol, you must start a **Live Activity** at the beginning of the recording and keep it active for the entire session. If you don't, the recording will be automatically stopped. |
| `LiveActivityIntent` (3) | An intent that starts, pauses, or modifies a Live Activity. |

***

## 2. `AppIntentManager` Class

### `AppIntentManager.register(options): AppIntentFactory<T>`

Registers a new `AppIntent` by specifying its name, protocol, and perform logic. When a control (e.g., Button or Toggle) triggers the intent, the associated `perform` function is called.

```ts
static register<T>(options: {
  name: string;
  protocol: AppIntentProtocol;
  perform: AppIntentPerform<T>;
}): AppIntentFactory<T>
```

#### Parameters:

| Property | Type | Description |
| --- | --- | --- |
| `name` | `string` | A unique identifier for the AppIntent. |
| `protocol` | `AppIntentProtocol` | The protocol this intent implements. |
| `perform` | `AppIntentPerform<T>` | The asynchronous function executed when the intent is triggered.
The `params` argument is passed from the control. | #### Returns: - **`AppIntentFactory`**: A factory function that creates an `AppIntent` instance with the specified parameters. #### Example: ```tsx // app_intents.tsx export const ToggleDoorIntent = AppIntentManager.register({ name: "ToggleDoorIntent", protocol: AppIntentProtocol.AppIntent, perform: async ({ id, newState }: { id: string; newState: boolean }) => { // Custom logic: toggle the door state await setDoorState(id, newState) // Notify UI to refresh toggle state ControlWidget.reloadToggles() } }) ``` In a control view file (e.g., `control_widget_toggle.tsx`): ```tsx ControlWidget.present( ) ``` In a widget file (`widget.tsx`): ```tsx ``` *** ## 3. Execution Environment All AppIntents registered via `AppIntentManager` are executed in the `"app_intents"` environment. This allows safe use of APIs suitable for background execution, such as: - Fetching data from the network - Controlling Live Activities - Triggering control view refreshes *** ## 4. Best Practices 1. **Centralized Definitions**: All AppIntents **must** be defined in `app_intents.tsx` for discoverability and maintainability. 2. **Strong Typing**: Define explicit parameter types `T` for both `perform` and control usage to benefit from type checking and autocomplete. 3. **Choose the Right Protocol**: - General operation → `AppIntent` - Audio playback → `AudioPlaybackIntent` - Audio recording → `AudioRecordingIntent` _(requires iOS 18+, with Live Activity)_ - Live Activity control → `LiveActivityIntent` 4. **Trigger UI Updates**: If the intent modifies a UI state (e.g., toggle), call: - `ControlWidget.reloadButtons()` - `ControlWidget.reloadToggles()` - `Widget.reloadAll()` depending on where the control is hosted. --- url: /TestFlight/guide/Assistant/Assistant Conversation APIs.md --- # Assistant Conversation APIs The Conversation APIs are used to **start, control, and present a system-hosted Assistant chat session**. A conversation corresponds to a **fully managed chat page**, where Scripting handles the UI, streaming output, provider selection, and message lifecycle. Key differences from other Assistant APIs: - Conversation APIs are designed for **interactive chat experiences** - UI, streaming, and message handling are managed by the system - Developers control **when the conversation starts, ends, and is shown** *** ## Conversation Lifecycle A typical conversation follows this lifecycle: 1. `startConversation` — create a conversation (optionally auto-start) 2. `present` — display the Assistant chat page 3. User interacts with the Assistant 4. `dismiss` — temporarily hide the chat page (conversation continues) 5. `present` — show the same conversation again 6. 
`stopConversation` — terminate the conversation and release resources Important rules: - **Only one active conversation can exist at a time** - Calling `startConversation` while a conversation is active throws an error - Calling `stopConversation` automatically calls `dismiss` *** ## startConversation ### API Definition ```ts function startConversation(options: { message: string images?: UIImage[] autoStart?: boolean systemPrompt?: string modelId?: string provider?: Provider }): Promise ``` *** ### Parameters #### options.message - Type: `string` - Required - The **initial user message** of the conversation - Equivalent to the first user input in the chat UI *** #### options.images (optional) - Type: `UIImage[]` - Sent together with the initial message - Common use cases: - Image analysis - Starting a conversation from a photo or screenshot *** #### options.autoStart (optional) - Type: `boolean` - Default: `false` Behavior: - `true` - The assistant immediately starts generating a reply - `false` - The conversation is created but not sent automatically - Typically used when the user should press “Send” manually *** #### options.systemPrompt (optional) - Type: `string` Behavior: - If omitted: - The built-in Scripting Assistant system prompt is used - Assistant Tools are available - If provided: - Fully replaces the default system prompt - **Assistant Tools are disabled** Typical use cases: - Creating a highly customized chat role - Running the model without any tool access *** #### options.modelId (optional) - Type: `string` - Specifies the model to use for this conversation - Users may still change the model in the chat UI (if allowed) *** #### options.provider (optional) - Type: `Provider` - Specifies the default provider for the conversation - Users may change the provider in the chat UI (if allowed) *** ### Return Value ```ts Promise ``` - Resolves when the conversation is successfully created - Rejects if a conversation already exists *** ## present ### API Definition ```ts function present(): Promise ``` *** ### Behavior - Presents the Assistant chat page for the current conversation - If the page is already presented, calling this has no effect - Can be called: - After `startConversation` - After `dismiss` to re-present the same conversation *** ### Return Value ```ts Promise ``` - Resolves when the chat page is dismissed by the user *** ## dismiss ### API Definition ```ts function dismiss(): Promise ``` *** ### Behavior - Dismisses the Assistant chat page - **Does not stop the conversation** - Conversation state and history are preserved Typical use cases: - Temporarily hiding the chat UI - Navigating to another page or task *** ### Return Value ```ts Promise ``` *** ## stopConversation ### API Definition ```ts function stopConversation(): Promise ``` *** ### Behavior - Fully terminates the current conversation - Automatically calls `dismiss` - Cleans up conversation state and resources - After calling this, a new conversation may be started *** ### Return Value ```ts Promise ``` *** ## Conversation State Flags ### Assistant.isAvailable ```ts const isAvailable: boolean ``` - Indicates whether the current user has access to the Assistant - If `false`, all Conversation APIs are unavailable *** ### Assistant.isPresented ```ts const isPresented: boolean ``` - Indicates whether the Assistant chat page is currently presented *** ### Assistant.hasActiveConversation ```ts const hasActiveConversation: boolean ``` - Indicates whether there is an active conversation - Commonly used to guard 
against duplicate `startConversation` calls *** ## Examples ### Example 1: Typical usage ```ts await Assistant.startConversation({ message: "Help me summarize this article.", autoStart: true }) await Assistant.present() ``` *** ### Example 2: Create a conversation without auto-sending ```ts await Assistant.startConversation({ message: "Let's discuss system architecture design.", autoStart: false }) await Assistant.present() // User manually presses Send in the UI ``` *** ### Example 3: Dismiss and re-present the same conversation ```ts await Assistant.startConversation({ message: "Analyze this image.", images: [image], autoStart: true }) await Assistant.present() await Assistant.dismiss() // Later, re-present the same conversation await Assistant.present() ``` *** ### Example 4: Stop the current conversation and start a new one ```ts if (Assistant.hasActiveConversation) { await Assistant.stopConversation() } await Assistant.startConversation({ message: "Start a new topic.", autoStart: true }) await Assistant.present() ``` *** ## Best Practices - Treat Conversation APIs as a **managed chat UI** - Do not mix Conversation APIs with `requestStreaming` in the same flow - Always check `hasActiveConversation` before calling `startConversation` - For one-shot or data-oriented tasks, prefer: - `requestStructuredData` - `requestStreaming` - Use Conversation APIs when continuous user interaction is required *** ## Design Boundaries - Conversation APIs are not suitable for headless or background tasks - Not intended for fully automated workflows - Not ideal when you need strict control over prompts, tokens, or output format --- url: /TestFlight/guide/Assistant/Assistant Quick Start.md --- # Assistant Quick Start The Assistant API in Scripting provides three distinct capabilities, each designed for a different type of use case: **structured data**, **streaming output**, and **interactive conversations**. Before choosing an API, first decide **what kind of result you need**. *** ## Assistant API Overview | Category | Main APIs | Best For | | ---------------- | ---------------------------------------------------------------- | -------------------------------- | | Structured Data | `requestStructuredData` | Extracting predictable JSON data | | Streaming Output | `requestStreaming` | Real-time text generation | | Conversations | `startConversation` / `present` / `dismiss` / `stopConversation` | Fully managed chat UI | *** ## requestStructuredData **Purpose** Requests **strictly structured JSON output** that conforms to a provided schema. **Best suited for** - Parsing receipts, invoices, and bills - Extracting fields from natural language - Generating configuration or rule objects - Any output that must be consumed by program logic **Key characteristics** - Stable and predictable output - No streaming or incremental updates - Ideal for background or headless scenarios **In one sentence** > If you need **data**, use `requestStructuredData`. *** ## requestStreaming **Purpose** Requests **streaming output**, allowing you to receive content incrementally as the model generates it. **Best suited for** - Typing-effect UI - Long-form content generation - Low-latency user feedback **Key characteristics** - Emits text, reasoning, and usage chunks - Can be rendered progressively - Output is not guaranteed to be structured **In one sentence** > If you need **real-time output**, use `requestStreaming`. 
*** ## Conversation APIs **Related methods** - `startConversation` - `present` - `dismiss` - `stopConversation` **Purpose** Creates and presents a **system-hosted Assistant chat experience**. **Best suited for** - ChatGPT-style interactions - Multi-turn conversations - Scenarios where the system manages UI, streaming, and provider switching **Key characteristics** - Built-in chat UI - Streaming handled automatically - Only one active conversation at a time **In one sentence** > If you need a **full chat experience**, use the Conversation APIs. *** ## How to Choose the Right API ### Common Scenarios - **Parse a receipt →** `requestStructuredData` - **Show AI writing text live →** `requestStreaming` - **Open a chat interface for users →** Conversation APIs - **No UI, just results →** `requestStructuredData` or `requestStreaming` - **Let the system manage the chat UI →** Conversation APIs *** ## Minimal Examples ### Structured Data ```ts const result = await Assistant.requestStructuredData(...) ``` *** ### Streaming Output ```ts const stream = await Assistant.requestStreaming(...) for await (const chunk of stream) { // handle chunk } ``` *** ### Conversation ```ts await Assistant.startConversation({ message: "Hello", autoStart: true }) await Assistant.present() ``` *** ## Usage Tips - Do not mix Conversation APIs with `requestStreaming` in the same flow - Prefer `requestStructuredData` whenever output must be consumed as data - Use streaming or conversations for presentation-focused scenarios *** ## Next Steps For deeper details, refer to: - `requestStructuredData` – detailed schema-driven data extraction - `requestStreaming` – streaming behavior and chunk handling - Conversation APIs – lifecycle and interaction patterns --- url: /TestFlight/guide/Assistant/requestStreaming.md --- # requestStreaming `requestStreaming` requests a **streaming response** from the Assistant. Instead of returning a complete result at once, the Assistant emits **chunks incrementally** as the model generates output. This enables: - Real-time UI updates (typing effect) - Low-latency handling of long responses - Progressive rendering of results - Streaming logs and intermediate output handling The API returns a **`ReadableStream`**, which can be consumed using `for await ... of`. *** ## API Definition ```ts function requestStreaming(options: { systemPrompt?: string | null messages: MessageItem | MessageItem[] provider?: Provider modelId?: string }): Promise> ``` *** ## Parameters ### options.systemPrompt (optional) - Type: `string | null` - Specifies the system prompt for this request. - If omitted: - The default Assistant system prompt is used. - If provided: - It **fully replaces** the default system prompt. - Assistant Tools are **not available**. Typical use cases: - Defining a strict role (e.g. reviewer, translator, summarizer) - Enforcing output tone or behavior - Running the model without built-in tools *** ### options.messages - Type: `MessageItem | MessageItem[]` - Required - Represents the conversation context sent to the model. #### MessageItem ```ts type MessageItem = { role: "user" | "assistant" content: MessageContent | MessageContent[] } ``` - `role` - `"user"`: user input - `"assistant"`: previous assistant messages (for context) *** ### MessageContent Types #### Text ```ts type MessageTextContent = | string | { type: "text"; content: string } ``` *** #### Image ```ts type MessageImageContent = { type: "image" content: string // data:image/...;base64,... 
} ``` *** #### Document ```ts type MessageDocumentContent = { type: "document" content: { mediaType: string data: string // base64 } } ``` *** ### options.provider (optional) - Type: `Provider` - Specifies the AI provider. - If omitted, the currently configured default provider is used. - Supported values: - `"openai"` - `"gemini"` - `"anthropic"` - `"deepseek"` - `"openrouter"` - `{ custom: string }` *** ### options.modelId (optional) - Type: `string` - Specifies the model ID. - Must match a model actually supported by the selected provider. - If omitted, the provider’s default model is used. *** ## Return Value ```ts Promise> ``` Once resolved, you receive a stream that can be consumed asynchronously. *** ## StreamChunk Types The stream may emit the following chunk types. *** ### StreamTextChunk ```ts type StreamTextChunk = { type: "text" content: string } ``` - Represents user-visible generated text. - Multiple chunks concatenated form the final response. *** ### StreamReasoningChunk ```ts type StreamReasoningChunk = { type: "reasoning" content: string } ``` - Represents intermediate reasoning produced by the model. - Availability and granularity depend on the provider and model. *** ### StreamUsageChunk ```ts type StreamUsageChunk = { type: "usage" content: { totalCost: number | null cacheReadTokens: number | null cacheWriteTokens: number | null inputTokens: number outputTokens: number } } ``` Notes: - Typically emitted once near the end of the stream. - Some providers may omit certain fields. - `totalCost` may be `null` if the provider does not expose pricing data. *** ## Examples ### Example 1: Basic streaming request ```ts const stream = await Assistant.requestStreaming({ messages: { role: "user", content: "Tell me a short science fiction story." }, provider: "openai" }) let result = "" for await (const chunk of stream) { if (chunk.type === "text") { result += chunk.content console.log(chunk.content) } } ``` *** ### Example 2: Handling text, reasoning, and usage separately ```ts const stream = await Assistant.requestStreaming({ systemPrompt: "You are a precise technical writing assistant.", messages: [ { role: "user", content: "Explain what HTTP/3 is." } ] }) let answer = "" let reasoningLog = "" let usage = null for await (const chunk of stream) { switch (chunk.type) { case "text": answer += chunk.content break case "reasoning": reasoningLog += chunk.content break case "usage": usage = chunk.content break } } console.log(answer) console.log(usage) ``` *** ### Example 3: Streaming with document input ```ts const stream = await Assistant.requestStreaming({ messages: [ { role: "user", content: [ { type: "text", content: "Summarize the key points of this document." }, { type: "document", content: { mediaType: "application/pdf", data: "JVBERi0xLjQKJcfs..." } } ] } ], provider: "anthropic" }) for await (const chunk of stream) { if (chunk.type === "text") { console.log(chunk.content) } } ``` *** ## Usage Notes and Best Practices - Streams must be consumed **sequentially**; do not read concurrently. - For UI scenarios: - Render `text` chunks immediately. - Keep `reasoning` for debugging or developer modes. - Process `usage` after completion. - If you no longer need the output, stop consuming the stream to avoid unnecessary cost. - Not all providers/models emit `reasoning` or `usage`. - Do not assume a chunk represents a complete sentence; chunk sizes vary. 
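The note above about stopping early can be made concrete. The following is a minimal sketch; the 500-character cutoff and the `console.log` reporting are illustrative choices, not part of the API. Breaking out of the `for await` loop simply stops reading further chunks.

```ts
const stream = await Assistant.requestStreaming({
  messages: { role: "user", content: "Write a long essay about compilers." },
  provider: "openai"
})

let text = ""
for await (const chunk of stream) {
  if (chunk.type !== "text") continue
  text += chunk.content
  // Illustrative cutoff: stop consuming once enough text has arrived.
  if (text.length > 500) break
}
console.log(text)
```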
--- url: /TestFlight/guide/Assistant/requestStructuredData.md ---

# requestStructuredData

`requestStructuredData` requests **structured JSON output** from the assistant that conforms to a provided JSON schema. This API is designed for workflows where you want a predictable, programmatically usable result rather than free-form text.

Common use cases include:

- Extracting structured fields from natural language
- Parsing invoices, receipts, and tickets
- Generating configuration objects
- Normalizing data across different AI providers/models

***

## Supported JSON Schema Types

Scripting defines a lightweight schema structure with three building blocks.

### Primitive

```ts
type JSONSchemaPrimitive = {
  type: "string" | "number" | "boolean"
  required?: boolean
  description: string
}
```

***

### Object

```ts
type JSONSchemaObject = {
  type: "object"
  properties: Record<string, JSONSchemaType>
  required?: boolean
  description: string
}
```

***

### Array

```ts
type JSONSchemaArray = {
  type: "array"
  items: JSONSchemaType
  required?: boolean
  description: string
}
```

***

## API Signatures

### Without images

```ts
function requestStructuredData<R>(
  prompt: string,
  schema: JSONSchemaArray | JSONSchemaObject,
  options?: {
    provider: Provider
    modelId?: string
  }
): Promise<R>
```

### With images

```ts
function requestStructuredData<R>(
  prompt: string,
  images: string[],
  schema: JSONSchemaArray | JSONSchemaObject,
  options?: {
    provider: Provider
    modelId?: string
  }
): Promise<R>
```

***

## Parameters

### prompt

- Type: `string`
- Required
- The instruction to the model describing what to extract or generate.
- For best reliability, explicitly specify:
  - expected formats (e.g., ISO date)
  - currency rules
  - how to handle missing fields

### images (optional)

- Type: `string[]`
- Each item must be a **data URI**, e.g. `data:image/png;base64,...`
- Not all providers/models support images.
- Avoid passing too many images to reduce failure risk.

### schema

- Type: `JSONSchemaArray | JSONSchemaObject`
- Required
- Defines the **only acceptable** JSON structure for the response.
- Every field should have a clear `description` to guide the model.

### options.provider

- Type: `Provider`
- Optional (uses the default configured provider if omitted)
- Supported:
  - `"openai" | "gemini" | "anthropic" | "deepseek" | "openrouter" | { custom: string }`

### options.modelId (optional)

- Type: `string`
- Must match a model actually supported by the chosen provider.
- If omitted, Scripting uses the provider’s default model.

***

## Return Value

```ts
Promise<R>
```

- `R` is the generic type you provide.
- The resolved value is expected to match your schema.
- The promise rejects if the assistant cannot return a valid structured result.
*** ## Examples ### Example 1: Parse a receipt/bill into line items (time + amount) This example asks the assistant to analyze a textual receipt and extract: - receipt time (`purchasedAt`) - line items (`items[]`) - item name - item time (if present; otherwise null) - amount - total amount ```ts type ReceiptItem = { name: string time: string | null amount: number } type ReceiptParsed = { purchasedAt: string | null currency: string | null items: ReceiptItem[] total: number | null } const receiptText = ` Star Coffee 2026-01-08 14:23 Latte (Large) $5.50 Blueberry Muffin $3.20 Tax $0.79 Total $9.49 ` const parsed = await Assistant.requestStructuredData( [ "Analyze the receipt text below and extract:", "- purchasedAt: the purchase date/time in ISO-8601 if possible", "- currency: currency code if you can infer it (otherwise null)", "- items: only actual purchasable items (exclude tax/total lines)", " - name: item name", " - time: item-level time if present, otherwise null", " - amount: numeric amount", "- total: numeric total if present, otherwise null", "", "Receipt:", receiptText ].join("\n"), { type: "object", description: "Parsed receipt content", properties: { purchasedAt: { type: "string", description: "Purchase date/time in ISO-8601 format if available, otherwise an empty string" }, currency: { type: "string", description: "Currency code like USD/EUR/CNY if inferable, otherwise an empty string" }, items: { type: "array", description: "Purchased line items (exclude tax/total/subtotal/service fee lines)", items: { type: "object", description: "A single purchased item line", properties: { name: { type: "string", description: "Item name" }, time: { type: "string", description: "Item-level time in ISO-8601 if available, otherwise an empty string" }, amount: { type: "number", description: "Item amount as a number" } } } }, total: { type: "number", description: "Total amount if present, otherwise -1" } } }, { provider: "openai" } ) // Post-processing suggestion: // Treat "" as null for purchasedAt/currency/time, and -1 as null for total. console.log(parsed) ``` *** ### Example 2: Generate an array ```ts type Expense = { name: string amount: number } const expenses = await Assistant.requestStructuredData( "List three common daily expenses with estimated amounts.", { type: "array", description: "A list of expenses", items: { type: "object", description: "A single expense item", properties: { name: { type: "string", description: "Expense name" }, amount: { type: "number", description: "Estimated amount" } } } }, { provider: "gemini" } ) ``` *** ### Example 3: Use images + schema ```ts type ImageSummary = { description: string containsText: boolean } const summary = await Assistant.requestStructuredData( "Analyze the image and summarize the main content.", ["data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD..."], { type: "object", description: "Image analysis result", properties: { description: { type: "string", description: "What the image shows" }, containsText: { type: "boolean", description: "Whether readable text exists" } } }, { provider: "openai" } ) ``` *** ## Best Practices - Make the schema explicit and descriptive; ambiguous schemas lead to unstable results. - Prefer `requestStructuredData` over parsing free-form text when your output is used by program logic. - For business-critical extraction (e.g., finance/receipts), add strict formatting rules in `prompt`. 
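To make the post-processing suggestion from Example 1 concrete, here is a minimal sketch. It assumes the `ReceiptParsed` type from Example 1 is in scope; the `normalizeReceipt` helper name is hypothetical.

```ts
// Convert the schema-friendly sentinel values back into nulls:
// "" -> null for purchasedAt/currency/time, and -1 -> null for total.
function normalizeReceipt(parsed: ReceiptParsed): ReceiptParsed {
  return {
    purchasedAt: parsed.purchasedAt === "" ? null : parsed.purchasedAt,
    currency: parsed.currency === "" ? null : parsed.currency,
    items: parsed.items.map(item => ({
      ...item,
      time: item.time === "" ? null : item.time
    })),
    total: parsed.total === -1 ? null : parsed.total
  }
}
```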
--- url: /TestFlight/guide/AssistantTool/Assistant Tool With User Approval.md --- # Assistant Tool With User Approval The approval-required AssistantTools typically involve **privacy, permissions, data modification, or irreversible actions**, and therefore must obtain explicit user consent before execution. *** ## When Approval Is Required An AssistantTool **must** use the approval model if it meets any of the following criteria: - Accesses user-private data (location, photos, contacts, calendar, health data, etc.) - Triggers system permission prompts - Modifies, deletes, or batch-writes user files - Produces long-lasting or irreversible effects - Requires the user to understand and confirm what will happen before execution The purpose of approval is **informed consent**, not blocking functionality. *** ## `assistant_tool.json` Configuration Approval-required tools must explicitly declare approval behavior: ```json { "displayName": "Request Current Location", "id": "request_current_location", "description": "Requests the user's current location one time.", "icon": "location.fill", "color": "systemBlue", "parameters": [], "requireApproval": true, "autoApprove": true, "scriptEditorOnly": false } ``` ### Key Fields - `requireApproval: true` Indicates that the tool must enter the approval flow before execution (either manual approval or auto-approval). - `autoApprove` (**tool-level auto-approval permission**) `autoApprove` determines **whether this specific tool is allowed to be auto-approved**. Auto-approval occurs **only when both conditions are met**: 1. The user has enabled _auto-approve tools_ in the **chat preset** 2. The tool itself declares `autoApprove: true` As a result: - User enables auto-approve, tool sets `autoApprove: false` → **manual approval is still required** - Tool sets `autoApprove: true`, user has not enabled auto-approve → **manual approval is still required** - User enables auto-approve **and** tool sets `autoApprove: true` → **the tool is automatically approved and executed** - `scriptEditorOnly` Determines whether the tool is restricted to the script editor context. *** ## The Two-Phase Approval Model Approval-required AssistantTools always follow a **two-phase model**: 1. **Approval Request Phase** Generates the approval UI (or explanatory text in auto-approval cases). 2. **Execute-With-Approval Phase** Executes the actual logic after approval has been granted (manually or automatically). These phases are implemented using two separate registration APIs. *** ## Registering the Approval Request Function ### Registration ```ts AssistantTool.registerApprovalRequest
<P>(requestFn) ``` ### Function Signature ```ts type AssistantToolApprovalRequestFn<P>
= ( params: P, scriptEditorProvider?: ScriptEditorProvider ) => Promise<{ title?: string message: string previewButton?: { label: string action: () => void } primaryButtonLabel?: string secondaryButtonLabel?: string }> ``` *** ### Approval Request Design Principles - The `message` must clearly explain **what will happen** and **why approval is required** - Use user-friendly language, not implementation details - **No side effects must occur in this phase** - All real operations must be performed in the execute phase *** ### Basic Example ```ts const approvalRequest: AssistantToolApprovalRequestFn<{}> = async () => { return { message: "The assistant wants to request your current location.", primaryButtonLabel: "Allow", secondaryButtonLabel: "Cancel" } } AssistantTool.registerApprovalRequest(approvalRequest) ``` *** ## Using the Preview Button The `previewButton` allows users to inspect the **expected outcome before approving**. Appropriate use cases include: - File modifications (showing diffs) - Batch edit previews - Data export summaries ### Example: File Diff Preview ```ts const approvalRequest: AssistantToolApprovalRequestFn<{ path: string }> = async ( params, editor ) => { if (!editor) { return { message: "This tool must be used in the script editor." } } const current = await editor.getFileContent(params.path) return { message: `The assistant wants to modify ${params.path}.`, primaryButtonLabel: "Apply Changes", secondaryButtonLabel: "Cancel", previewButton: { label: "Preview Diff", action: () => { if (current != null) { editor.openDiffEditor(params.path, current + "\n// New content") } } } } } ``` *** ## Registering the Execute-With-Approval Function ### Registration ```ts AssistantTool.registerExecuteToolWithApproval
<P>(executeFn) ``` ### Function Signature ```ts type AssistantToolExecuteWithApprovalFn<P>
= ( params: P, userAction: UserActionForApprovalRequest, scriptEditorProvider?: ScriptEditorProvider ) => Promise<{ success: boolean message: string }> ``` *** ## Handling `userAction` ```ts type UserActionForApprovalRequest = { primaryConfirmed: boolean secondaryConfirmed: boolean } ``` Handling rules: - Execute real logic **only if** `primaryConfirmed === true` - `secondaryConfirmed === true` usually means the user cancelled - If both values are `false`, treat it as an unconfirmed or aborted flow ### Example: Location Request ```ts const executeWithApproval: AssistantToolExecuteWithApprovalFn<{}> = async ( params, { primaryConfirmed } ) => { if (!primaryConfirmed) { return { success: false, message: "User cancelled the location request." } } try { const location = await Location.requestCurrent() return { success: true, message: [ "User location retrieved successfully.", `${location.latitude}`, `${location.longitude}` ].join("\n") } } catch { return { success: false, message: "Failed to retrieve user location." } } } ``` *** ## `autoApprove` and `userAction` Semantics When auto-approval actually occurs (user enables auto-approve **and** the tool sets `autoApprove: true`): - The system skips manual user interaction - The execute phase is still invoked - `userAction` represents an equivalent of **primary confirmation** (e.g. `primaryConfirmed: true`) Therefore, execution logic must **always** gate side effects on `primaryConfirmed`, ensuring compatibility with: - Manual approval - Automatic approval - Explicit cancellation *** ## `scriptEditorProvider` in Approval-Required Tools - If `scriptEditorOnly: true`: - Both approval request and execute functions receive `scriptEditorProvider` - Can be used for diffs, file access, lint inspection, etc. - If `scriptEditorOnly: false`: - `scriptEditorProvider` may be `undefined` - Editor capabilities must not be assumed *** ## Designing the Execution Message For approval-required tools, the returned `message` should: - Clearly indicate success, failure, or cancellation - Provide structured output when appropriate - Help the Assistant reason about the result Examples: ```text Location request was cancelled by the user. ``` ```text Location retrieved successfully. 39.9042 116.4074 ``` *** ## Using Test Functions Registered functions return test helpers: ```ts testApprovalFn({}) testExecuteFn({}, { primaryConfirmed: true, secondaryConfirmed: false }) ``` These are intended for validating logic paths and return values, not for simulating real system permission dialogs. *** ## Design Summary - Approval exists to ensure **informed user consent** - `autoApprove` is a **tool-level permission**, not a global switch - Auto-approval requires both user preset and tool consent - Never perform side effects during the approval request phase - Always guard execution logic with `primaryConfirmed` - Use previews whenever file or data changes are involved --- url: /TestFlight/guide/AssistantTool/Assistant Tool Without User Approval.md --- # Assistant Tool Without User Approval Tools without approval are intended for **low-risk, non-sensitive, and side-effect-free operations** that the Assistant can execute immediately. *** ## 1. When to Use Tools Without Approval Before choosing this model, you should clearly understand its boundaries. 
### Suitable Scenarios Tools without approval are appropriate when the tool: - Does not access system permissions or private user data - Does not read or modify sensitive user information - Does not perform irreversible or destructive operations - Performs pure logic, computation, or deterministic transformations Typical examples include: - Code or text formatting - Parsing structured data (JSON, YAML, CSV) - Generating boilerplate or template code - Safe and predictable edits inside the script editor *** ### Unsuitable Scenarios Do **not** use tools without approval when the tool: - Accesses location, photos, contacts, calendar, or similar data - Writes or overwrites user files in an irreversible way - Triggers system dialogs or permission requests - Produces results that are difficult for the user to anticipate These cases should use **approval-required AssistantTools** (see Document 3). *** ## 2. Configuration in `assistant_tool.json` For a tool that does not require approval, the configuration must explicitly declare: ```json { "displayName": "Format Script", "id": "format_script", "description": "Formats the current script files according to the project style.", "icon": "wand.and.stars", "color": "systemIndigo", "parameters": [], "requireApproval": false, "autoApprove": false, "scriptEditorOnly": true } ``` ### Key Fields - `requireApproval: false` Indicates that the tool will execute immediately without showing an approval dialog. - `autoApprove` Typically irrelevant in this mode and can be set to `false`. - `scriptEditorOnly` Determines whether the tool is restricted to the script editor. When `true`, the execution function receives a `ScriptEditorProvider`. *** ## 3. Execution Registration API Tools without approval register a single execution function: ```ts AssistantTool.registerExecuteTool
<P>(executeFn) ``` The corresponding function signature is: ```ts type AssistantToolExecuteFn<P>
= ( params: P, scriptEditorProvider?: ScriptEditorProvider ) => Promise<{ success: boolean message: string }> ``` *** ## 4. Minimal Example (No Parameters) ```ts type FormatParams = {} const formatScript: AssistantToolExecuteFn = async ( params, scriptEditorProvider ) => { if (!scriptEditorProvider) { return { success: false, message: "This tool can only be used inside the script editor." } } const files = scriptEditorProvider.getAllFiles() for (const file of files) { const content = await scriptEditorProvider.getFileContent(file) if (!content) continue const formatted = content.trim() await scriptEditorProvider.updateFileContent(file, formatted) } return { success: true, message: "All script files have been formatted successfully." } } const testFormatTool = AssistantTool.registerExecuteTool(formatScript) ``` *** ## 5. Example with Parameters ### Parameter Definition ```ts type ReplaceParams = { searchText: string replaceText: string } ``` ### Execution Logic ```ts const replaceInScripts: AssistantToolExecuteFn = async ( params, scriptEditorProvider ) => { if (!scriptEditorProvider) { return { success: false, message: "Script editor context is required." } } const files = scriptEditorProvider.getAllFiles() let affectedFiles = 0 for (const file of files) { const content = await scriptEditorProvider.getFileContent(file) if (!content) continue if (!content.includes(params.searchText)) continue const updated = content.replaceAll( params.searchText, params.replaceText ) await scriptEditorProvider.updateFileContent(file, updated) affectedFiles++ } return { success: true, message: `Replaced text in ${affectedFiles} file(s).` } } AssistantTool.registerExecuteTool(replaceInScripts) ``` *** ## 6. Designing the Return Message The `message` field is **consumed by the Assistant** and should be treated as structured output rather than raw logs. Recommended guidelines: - The first line should summarize the result - Avoid returning excessively large or unstructured content - Use multi-line output or lightweight markup when structure is helpful Examples: ```text Operation completed successfully. Affected files: 3 ``` or: ```text Replacement summary: 3 foo bar ``` *** ## 7. Error Handling Guidelines - Convert all expected failures into `{ success: false, message }` - Avoid throwing uncaught exceptions - Error messages should be clear and actionable, without exposing internal details Example: ```ts return { success: false, message: "Invalid parameters: searchText cannot be empty." } ``` *** ## 8. Progress Reporting (Optional) For long-running tools, progress updates can be reported via: ```ts AssistantTool.report("Formatting file: index.tsx") ``` Use progress reporting sparingly and only at meaningful stages. *** ## 9. Using the Test Function `registerExecuteTool` returns a test function that can be executed inside the script editor: ```ts testFormatTool({}) ``` Test functions are useful for: - Verifying parameter mapping - Validating execution logic - Debugging tools without triggering an Assistant conversation *** ## 10. Best Practices Summary - “No approval” does not mean “no risk” — evaluate carefully - Keep tool behavior deterministic and predictable - Avoid hidden side effects - Design output messages to support the Assistant’s next reasoning step --- url: /TestFlight/guide/AssistantTool/AssistantTool User-Initiated Cancellation.md --- # AssistantTool User-Initiated Cancellation To improve the user experience of long-running tools, AssistantTool introduces support for **user-initiated cancellation**. 
When a user cancels a tool while it is executing, developers may optionally provide an `onCancel` callback to return partially completed results. If `onCancel` is not implemented, cancellation is handled automatically by the system and no additional logic is required. This mechanism is particularly suitable for search, analysis, crawling, batch processing, and other multi-step or time-consuming tools. *** ## Capability Overview The cancellation mechanism introduces the following APIs: ```ts type OnCancel = () => string | null | undefined var onCancel: OnCancel | null | undefined const isCancelled: boolean ``` *** ## Core Semantics ### onCancel Is Optional - Implementing `onCancel` is optional - Not implementing `onCancel` is a fully valid and supported usage When `onCancel` is not set: - If the user clicks Cancel, the tool is marked as cancelled - Any results returned by the execution function after cancellation are automatically ignored - The Assistant does not consume or process those results The outcome is that developers do not need to write any additional logic to handle cancellation unless they explicitly want to return partial results. *** ### Purpose of onCancel The sole purpose of implementing `onCancel` is to proactively return **partially completed results** when a user cancels execution. It is an experience optimization, not a required responsibility. *** ## Semantics of isCancelled - `AssistantTool.isCancelled` becomes `true` immediately after the user cancels - It can be read at any point during execution - It is intended to control whether subsequent steps should continue running Typical uses include breaking loops, skipping expensive operations, and releasing resources early. *** ## When and Where to Register onCancel `onCancel` must be registered **inside the tool execution function**. Reasons: - Each tool execution has its own lifecycle - After execution completes, the system automatically resets `onCancel` to `null` - Registering `onCancel` outside the execution function has no effect Valid locations include: - `registerExecuteTool` - `registerExecuteToolWithApproval` *** ## Minimal Example Without onCancel ```ts const executeTool = async () => { await doSomethingSlow() return { success: true, message: "Operation completed." 
} } ``` Behavior: - If the user cancels, the tool is marked as cancelled - The returned result is automatically ignored - No additional cancellation handling is required *** ## Example With onCancel and Partial Results ```ts const executeTool = async () => { const partialResults: string[] = [] AssistantTool.onCancel = () => { return [ "Operation was cancelled by the user.", "Partial results:", ...partialResults ].join("\n") } for (const item of items) { if (AssistantTool.isCancelled) break const result = await process(item) partialResults.push(result) } return { success: true, message: partialResults.join("\n") } } ``` Behavior: - When the user cancels, `onCancel` is invoked immediately - Partially completed results are returned to the Assistant - Subsequent execution stops based on `isCancelled` *** ## Typical Use Cases Good candidates for implementing `onCancel` include: - Multi-source search and aggregation - Crawling and parsing multiple documents - Project-wide scanning and analysis - Batch computation or generation tasks - Long-running reasoning or processing pipelines Cases where `onCancel` is usually unnecessary include: - Tools that complete almost instantly - Operations with no meaningful intermediate results - Tasks where cancellation produces no useful partial output *** ## Return Value Guidelines for onCancel Return values from `onCancel` follow these rules: - Returning a `string` The string is used as the cancellation message sent to the Assistant - Returning `null` or `undefined` Indicates that no message should be returned (valid but generally not recommended) Recommended practices: - Clearly state that the operation was cancelled by the user - Explicitly label partial output as partial results when applicable *** ## Relationship to the Approval Flow - `onCancel` applies only during the execution phase - It does not affect the Approval Request phase - It works for both manually approved tools and auto-approved tools Important distinction: - `secondaryConfirmed` Indicates the user declined execution during the approval phase - `onCancel` Indicates the user cancelled during execution after approval *** ## Common Mistakes and Caveats ### Registering onCancel Outside the Execution Function This is ineffective because the execution context has already ended. *** ### Performing Expensive or Side-Effect Operations in onCancel `onCancel` should return quickly and must not initiate network requests, file writes, or other side effects. *** ### Ignoring isCancelled During Execution Even with `onCancel` implemented, long-running logic should explicitly check `isCancelled` to avoid unnecessary resource usage. 
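A minimal sketch of the "Registering onCancel Outside the Execution Function" caveat, under the assumption that the module is evaluated before any tool execution starts: the top-level assignment has no execution lifecycle to attach to, while the assignment inside the execution function applies to that run only.

```ts
// Ineffective: assigned at module top level, outside any execution function,
// so there is no active execution context for it to belong to.
AssistantTool.onCancel = () => "Cancelled."

const executeTool = async () => {
  // Effective: registered inside the execution function, for this run only.
  // The system resets onCancel to null after execution completes.
  AssistantTool.onCancel = () => "Operation was cancelled by the user."

  // ... long-running work that periodically checks AssistantTool.isCancelled ...

  return { success: true, message: "Done." }
}
```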
*** ## Recommended Execution Structure ```ts const executeTool = async () => { const partial = [] AssistantTool.onCancel = () => { return formatPartialResult(partial) } for (const item of items) { if (AssistantTool.isCancelled) break const r = await process(item) partial.push(r) } if (AssistantTool.isCancelled) { return } return { success: true, message: formatFinalResult(partial) } } ``` *** ## Design Philosophy - User cancellation is a normal interaction, not an error - `onCancel` is an optional enhancement, not a requirement - Not implementing `onCancel` does not break tool behavior - The system automatically ignores results after cancellation - Implement `onCancel` only to provide a more graceful completion experience *** ## Summary - AssistantTool supports user-initiated cancellation during execution - `onCancel` allows returning partially completed results - `isCancelled` enables early termination of execution logic - Cancellation is safely handled even without custom logic - Developers are not required to manage cancellation explicitly --- url: /TestFlight/guide/AssistantTool/Overview and Workflow.md --- # Overview and Workflow AssistantTool is an extensible tool mechanism in **Scripting** that provides system-level capabilities to the Assistant. By defining and implementing Assistant Tools, script authors can expose structured functionality—such as device access, file manipulation, and data processing—that the Assistant can invoke during a conversation and then continue reasoning based on the execution result. The design of AssistantTool focuses on two core goals: - Granting the Assistant additional capabilities while maintaining strong control and transparency, especially for sensitive operations that require explicit user approval. - Providing clear inputs, clear outputs, and testability, so tools can be reasoned about, validated, and safely composed by the Assistant. *** ## Types of AssistantTools From an execution model perspective, AssistantTool has two primary types, corresponding to two different registration APIs. ### Tools Without User Approval - `assistant_tool.json`: `requireApproval: false` - Code registration: - `AssistantTool.registerExecuteTool
<P>(executeFn)` Typical use cases include: - Pure computation or data transformation - Formatting, parsing, or code generation - Non-sensitive operations that do not access device permissions or private user data In this mode, the tool is executed immediately once the Assistant decides to invoke it. *** ### Tools With User Approval - `assistant_tool.json`: `requireApproval: true` - Code registration: - `AssistantTool.registerApprovalRequest<P>(requestFn)` - `AssistantTool.registerExecuteToolWithApproval<P>(executeFn)` Typical use cases include: - Accessing privacy-sensitive resources (location, photos, contacts, calendar, etc.) - Modifying user data or project files - Any operation where the user should explicitly understand and confirm what will happen Optionally, tools can provide a preview action that allows the user to inspect the expected outcome before approving execution. > The `autoApprove` flag determines **whether this specific tool is allowed to be auto-approved**. *** ## Script-Editor-Only Tools The `scriptEditorOnly` field in `assistant_tool.json` determines whether a tool can only be used inside the script editor. - `scriptEditorOnly: false` - The tool may be invoked in regular Assistant conversations. - A `ScriptEditorProvider` instance is usually not available. - `scriptEditorOnly: true` - The tool is restricted to the script editor context. - Execution functions receive an additional `scriptEditorProvider?: ScriptEditorProvider` parameter. - Intended for editor-centric tools such as code formatting, batch file edits, diff previews, or lint-driven refactors. *** ## Tool Creation and File Structure When an AssistantTool is created, the system generates two files: - `assistant_tool.json` Declares the tool’s metadata, parameters, and execution constraints. - `assistant_tool.tsx` Contains the implementation logic and registers the tool using the AssistantTool APIs. ### Responsibility of `assistant_tool.json` The configuration file is responsible for: - UI presentation (displayName, icon, color, description) - Tool routing (unique `id`) - Invocation constraints (parameters, requireApproval, autoApprove, scriptEditorOnly) Execution logic is never implemented in this file and always lives in `assistant_tool.tsx`. *** ## Parameters and Input Mapping Tool input parameters are defined in `assistant_tool.json` and are passed into execution functions as a typed `params` object. These parameters are provided to: - `AssistantToolApprovalRequestFn<P>(params, scriptEditorProvider?)` - `AssistantToolExecuteWithApprovalFn<P>(params, userAction, scriptEditorProvider?)` - `AssistantToolExecuteFn<P>
(params, scriptEditorProvider?)` Key conventions: - `P` is a TypeScript type declared in `assistant_tool.tsx`. - The runtime maps JSON parameters into the `params` object. - Tools without parameters typically use `P = {}`. *** ## Runtime Execution Flow From a runtime perspective, a complete tool invocation follows one of the flows below. ### Approval-Required Flow 1. The Assistant selects a tool and constructs `params`. 2. The system invokes the registered approval request function. 3. The approval request returns: - A message shown to the user - Optional title - Optional preview button - Optional primary and secondary button labels 4. The user’s selection is captured as `UserActionForApprovalRequest`. 5. The system invokes the execution function registered via `registerExecuteToolWithApproval`. 6. The execution function returns `{ success, message }` to the Assistant. *** ### No-Approval Flow 1. The Assistant selects a tool and constructs `params`. 2. The system directly invokes the execution function registered via `registerExecuteTool`. 3. The execution function returns `{ success, message }` to the Assistant. *** ## Tool Output Contract All execution functions return a uniform result structure: ```ts { success: boolean message: string } ``` The `message` field is treated as structured output consumable by the Assistant. Recommended patterns include: - Natural language summaries for simple results - Lightweight markup for structured data - Multi-line output where the first line is a summary and subsequent lines provide details Returning excessively large raw data is discouraged. For editor-related tools, previews and diffs should be surfaced through UI mechanisms instead. *** ## Test Functions Each registration API returns a corresponding test function for use inside the script editor: - Approval request registration → approval request test function - Execution with approval registration → execution test function with user action - Execution without approval registration → execution test function These functions allow developers to: - Validate parameter mapping - Verify execution logic and output - Debug tools without relying on an actual Assistant conversation *** ## Progress Reporting `AssistantTool.report(message: string, id?: string)` can be used to emit progress updates while a tool is running. The parameter `id` can be used to update an existing report. Typical use cases include: - Long-running operations - Multi-step processes - Debugging and observability Progress messages should be meaningful and not emitted excessively. --- url: /TestFlight/guide/AssistantTool/ScriptEditorProvider and Editor Types.md --- # ScriptEditorProvider and Editor Types This document provides a detailed reference for editor-related capabilities available to AssistantTools in the **script editor context** (`scriptEditorOnly: true`). It explains the `ScriptEditorProvider` interface and its associated types, along with recommended usage patterns and constraints. *** ## 1. Role and Responsibilities of ScriptEditorProvider `ScriptEditorProvider` is the communication interface between an AssistantTool and the **script editor**. 
An instance of this interface is provided when: - The tool declares `scriptEditorOnly: true` in `assistant_tool.json` - The tool is executed inside the script editor (including via test functions) Its primary responsibilities are: - Providing controlled access to the script project’s file system - Enabling structured and traceable file modifications - Exposing lint and syntax diagnostics - Supporting preview-first workflows via diff visualization *** ## 2. Project-Level Information ### `scriptName` ```ts readonly scriptName: string ``` Represents the name of the current script project. Typical use cases: - Including scope information in the returned `message` - Providing context in logs or `AssistantTool.report` output *** ## 3. File and Directory Discovery ### Checking File Existence ```ts exists(relativePath: string): boolean ``` - `relativePath` is relative to the project root - Commonly used for validation or conditional file creation *** ### Retrieving All Folders ```ts getAllFolders(): string[] ``` Returns all folder paths in the project (relative paths). Typical use cases: - Generating files in bulk - Inspecting project structure - Building grouping or navigation logic *** ### Retrieving All Files ```ts getAllFiles(): string[] ``` Returns all file paths in the project (relative paths). Typical use cases: - Project-wide scans - Batch formatting, search, or replacement - Mapping lint diagnostics to files *** ## 4. Reading and Writing File Content ### Reading File Content ```ts getFileContent(relativePath: string): Promise ``` - Returns `null` if the file does not exist - Callers should handle the null case explicitly *** ### Updating an Entire File ```ts updateFileContent(relativePath: string, content: string): Promise ``` - Replaces the entire file content - Suitable for deterministic operations such as formatting - Not recommended for complex or fine-grained edits *** ### Writing to a File (Auto-Creation) ```ts writeToFile(relativePath: string, content: string): Promise ``` - Creates the file if it does not exist - Overwrites existing content - Commonly used for file generation or templates *** ## 5. Structured Editing APIs (Recommended) For most editor tools, **structured edits are safer and more predictable** than full-file replacements. *** ### `ScriptEditorFileOperation` ```ts type ScriptEditorFileOperation = { startLine: number content: string } ``` Semantics: - `startLine` is **1-based** - `content` is the text to insert or replace - The end line is implicit and determined by the editing operation *** ### Inserting Content ```ts insertContent( relativePath: string, operations: ScriptEditorFileOperation[] ): Promise ``` Behavior: - Inserts content **before** the specified line - Operations are applied in array order - Line numbers refer to the original file state Recommended practice: apply insert operations **from bottom to top** to avoid line-shift issues. Typical use cases: - Inserting imports, comments, or new functions - Augmenting existing code blocks *** ### Replacing Content ```ts replaceInFile( relativePath: string, operations: ScriptEditorFileOperation[] ): Promise ``` Behavior: - Replaces content starting at `startLine` - Intended for precise, line-based substitutions - Not suitable for fuzzy or pattern-based replacements *** ## 6. 
Diff Preview Support ### `openDiffEditor` ```ts openDiffEditor(relativePath: string, content: string): void ``` Displays a diff view comparing: - The current file content - The provided prospective content This method does **not** modify the file. Recommended usage: - During the Approval Request phase - As the action of a `previewButton` - Before any batch or destructive modifications *** ## 7. Lint and Syntax Diagnostics ### `ScriptLintError` ```ts type ScriptLintError = { line: number message: string } ``` Represents a single lint or syntax error. *** ### Retrieving Lint Errors ```ts getLintErrors(): Record ``` Return structure: - Key: file path (relative) - Value: array of lint errors for that file *** ### Common Usage Pattern - Scan all lint errors - Identify affected files and line numbers - Optionally attempt safe, deterministic fixes - Summarize diagnostics in the tool’s result message Example: ```ts const errors = editor.getLintErrors() for (const file in errors) { for (const error of errors[file]) { // error.line // error.message } } ``` *** ## 8. Usage Constraints and Safety Guidelines Important constraints and recommendations: - All paths must be **relative paths** - Do not assume file content always exists - Avoid concurrent modifications to the same file - Prefer structured edits over full replacements - Provide diff previews for batch or impactful changes *** ## 9. Recommended Workflow for Editor-Based AssistantTools A robust editor-based AssistantTool typically follows this flow: 1. Scan the project using `getAllFiles` or `getLintErrors` 2. Compute the intended changes 3. In the Approval Request phase: - Clearly explain the changes - Provide a diff preview via `openDiffEditor` 4. In the Execute phase: - Perform changes only after confirmation - Use structured editing APIs 5. Return a concise, structured execution summary *** ## 10. Summary - `ScriptEditorProvider` is the bridge between AssistantTools and the script editor - It enables **controlled, structured, and previewable** file operations - Editor-based tools should prioritize predictability and user trust - Combining Approval flows with previews leads to high-confidence editing experiences --- url: /TestFlight/guide/Changelog/2.4.3/Animation and Transition.md --- Scripting Animation & Transition System # Animation Class The `Animation` class describes how values animate in time. ## Factory Methods ### `Animation.default()` Creates a default system animation. ```ts static default(): Animation ``` *** ### `Animation.linear(duration?)` ```ts static linear(duration?: number | null): Animation ``` Constant-speed animation. *** ### `Animation.easeIn(duration?)` ```ts static easeIn(duration?: number | null): Animation ``` *** ### `Animation.easeOut(duration?)` ```ts static easeOut(duration?: number | null): Animation ``` *** ### `Animation.bouncy(options?)` ```ts static bouncy(options?: { duration?: number extraBounce?: number }): Animation ``` Spring-like animation with additional bounce. *** ### `Animation.smooth(options?)` ```ts static smooth(options?: { duration?: number extraBounce?: number }): Animation ``` *** ### `Animation.snappy(options?)` ```ts static snappy(options?: { duration?: number extraBounce?: number }): Animation ``` *** ### `Animation.spring(options?)` Supports two mutually exclusive modes. 
```ts static spring(options?: { blendDuration?: number } & ( | { duration?: number bounce?: number response?: never dampingFraction?: never } | { response?: number dampingFraction?: number duration?: never bounce?: never } )): Animation ``` *** ### `Animation.interactiveSpring(options?)` ```ts static interactiveSpring(options?: { response?: number dampingFraction?: number blendDuration?: number }): Animation ``` *** ### `Animation.interpolatingSpring(options?)` ```ts static interpolatingSpring(options?: { mass?: number stiffness: number damping: number initialVelocity?: number } | { duration?: number bounce?: number initialVelocity?: number mass?: never stiffness?: never damping?: never }): Animation ``` *** ## Modifier Methods ### `.delay(time)` ```ts delay(time: number): Animation ``` ### `.repeatCount(count, autoreverses)` ```ts repeatCount(count: number, autoreverses?: boolean): Animation ``` ### `.repeatForever(autoreverses)` ```ts repeatForever(autoreverses?: boolean): Animation ``` *** # Transition Class `Transition` describes how a view enters or leaves the hierarchy. ## Instance Methods ### `.animation(animation)` Attach a specific animation to a transition. ```ts animation(animation?: Animation): Transition ``` ### `.combined(other)` Combine transitions. ```ts combined(other: Transition): Transition ``` *** ## Static Transitions ### Identity ```ts Transition.identity() ``` ### Move ```ts Transition.move(edge: Edge) ``` ### Offset ```ts Transition.offset(position?: Point) ``` ### Push ```ts Transition.pushFrom(edge: Edge) ``` ### Opacity ```ts Transition.opacity() ``` ### Scale ```ts Transition.scale(scale?: number, anchor?: Point | KeywordPoint) ``` ### Slide ```ts Transition.slide() ``` ### Fade ```ts Transition.fade(duration?: number) ``` ### Flip transitions ```ts Transition.flipFromLeft(duration?) Transition.flipFromRight(duration?) Transition.flipFromTop(duration?) Transition.flipFromBottom(duration?) ``` ### Asymmetric ```ts Transition.asymmetric(insertion: Transition, removal: Transition) ``` *** # withAnimation ```ts function withAnimation(body: () => void): Promise function withAnimation(animation: Animation, body: () => void): Promise function withAnimation( animation: Animation, completionCriteria: "logicallyComplete" | "removed", body: () => void ): Promise ``` Wraps a state update and animates any affected values. Example: ```ts withAnimation(Animation.easeOut(0.3), () => { visible.setValue(false) }) ``` *** # Correct Usage of the animation View Modifier ### (Important Correction) In Scripting, the `animation` prop is **not**: ```tsx animation={anim} // incorrect ``` The correct format is: ```tsx animation={{ animation: anim, value: }} ``` ### Meaning: | Field | Description | | ----------- | ----------------------------------------------------- | | `animation` | The `Animation` instance to use | | `value` | The observable value whose changes should be animated | This mirrors SwiftUI’s `.animation(animation, value: value)` modifier. 
***

## Correct Examples

### Example: Animate size changes

```tsx
const size = useObservable(100)
const anim = Animation.spring({ duration: 0.3, bounce: 0.3 })

// Attach the animation to the view that renders `size`:
// animation={{ animation: anim, value: size }}
```

--- url: /TestFlight/guide/Changelog/2.4.6/Device/index.md ---

# Device

The `Device` namespace provides access to information about the current device and its environment, including hardware characteristics, system details, screen metrics, battery status, orientation, proximity sensor state, locale and language settings, wake lock control, and network interfaces. This API is commonly used to adapt UI layouts, behavior, and feature availability based on the device’s runtime context.

***

## Orientation

Represents the physical orientation of the device.

```ts
type Orientation =
  | "portrait"
  | "portraitUpsideDown"
  | "landscapeLeft"
  | "landscapeRight"
  | "faceUp"
  | "faceDown"
  | "unknown"
```

### Description

- `portrait`: Portrait orientation, default upright position
- `portraitUpsideDown`: Portrait orientation, upside down
- `landscapeLeft`: Landscape orientation rotated to the left
- `landscapeRight`: Landscape orientation rotated to the right
- `faceUp`: Device is lying flat with the screen facing upward
- `faceDown`: Device is lying flat with the screen facing downward
- `unknown`: Orientation cannot be determined

***

## InterfaceOrientation

Represents the supported interface orientations for the app.

```ts
type InterfaceOrientation =
  | "portrait"
  | "portraitUpsideDown"
  | "landscape"
  | "landscapeLeft"
  | "landscapeRight"
  | "all"
  | "allButUpsideDown"
```

### Description

- `portrait`: Portrait orientation, default upright position
- `portraitUpsideDown`: Portrait orientation, upside down
- `landscape`: Landscape orientation, either left or right
- `landscapeLeft`: Landscape orientation rotated to the left
- `landscapeRight`: Landscape orientation rotated to the right
- `all`: All supported orientations
- `allButUpsideDown`: All supported orientations except upside down

***

## NetworkInterface

Describes a single network interface address.

```ts
type NetworkInterface = {
  address: string
  netmask: string | null
  family: "IPv4" | "IPv6"
  mac: string | null
  isInternal: boolean
  cidr: string | null
}
```

### Properties

- `address`: IP address
- `netmask`: Subnet mask
- `family`: Address family, either IPv4 or IPv6
- `mac`: MAC address (may be null depending on system restrictions)
- `isInternal`: Indicates whether the interface is internal (for example, loopback)
- `cidr`: CIDR notation, such as `192.168.1.10/24`

***

## BatteryState

Represents the current battery state.

```ts
type BatteryState = "full" | "charging" | "unplugged" | "unknown"
```

### Description

- `full`: Battery is fully charged
- `charging`: Device is currently charging
- `unplugged`: Device is not connected to power
- `unknown`: Battery state cannot be determined

***

## Device Information

### model

```ts
const model: string
```

The device model, such as `"iPhone"` or `"iPad"`.

***

### localizedModel

```ts
const localizedModel: string
```

The localized name of the device model.

***

### systemVersion

```ts
const systemVersion: string
```

The current operating system version, for example `"18.2"`.

***

### systemName

```ts
const systemName: string
```

The name of the operating system, such as `"iOS"`, `"iPadOS"`, or `"macOS"`.

***

### isiPad / isiPhone

```ts
const isiPad: boolean
const isiPhone: boolean
```

Indicates whether the current device is an iPad or an iPhone.
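A quick sketch of how these identity properties might drive script behavior; the column count is purely an illustrative choice, not part of the API:

```ts
import { Device } from "scripting"

// Branch script behavior on the device type (illustrative layout decision).
const columns = Device.isiPad ? 3 : 1
console.log(`${Device.systemName} ${Device.systemVersion} on ${Device.model}`)
console.log(`Rendering ${columns} column(s)`)
```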
*** ### screen ```ts const screen: { width: number height: number scale: number } ``` Screen metrics: - `width`: Screen width in logical pixels - `height`: Screen height in logical pixels - `scale`: Screen scale factor (for example, 2 or 3) *** ## Battery and Sensors ### batteryState ```ts const batteryState: BatteryState ``` The current battery state. *** ### batteryLevel ```ts const batteryLevel: number ``` The current battery level, expressed as a value between `0.0` and `1.0`. *** ### proximityState ```ts const proximityState: boolean ``` The state of the proximity sensor. `true` indicates that the device is close to the user, such as during a phone call. *** ## Orientation and Layout ### isLandscape / isPortrait / isFlat ```ts const isLandscape: boolean const isPortrait: boolean const isFlat: boolean ``` - `isLandscape`: Indicates whether the device is in a landscape orientation - `isPortrait`: Indicates whether the device is in a portrait orientation - `isFlat`: Indicates whether the device is lying flat (face up or face down) *** ### orientation ```ts const orientation: Orientation ``` The current physical orientation of the device. *** ### supportedInterfaceOrientations ```ts var supportedInterfaceOrientations: InterfaceOrientation[] ``` The list of supported interface orientations. You can set this property to limit the orientations that your page supports. #### Example ```tsx function Page() { useEffect(() => { Device.supportedInterfaceOrientations = ["all"] return () => { Device.supportedInterfaceOrientations = ["portrait"] } }, []) return ... } ``` *** ## Appearance and Environment ### colorScheme ```ts const colorScheme: ColorScheme ``` The current system color scheme, such as light or dark mode. *** ### isiOSAppOnMac ```ts const isiOSAppOnMac: boolean ``` Indicates whether the current process is an iPhone or iPad app running on macOS. *** ## Locale and Language ### systemLocale ```ts const systemLocale: string ``` The current system locale, for example `"en_US"`. *** ### preferredLanguages ```ts const preferredLanguages: string[] ``` The user’s preferred languages, for example: ```ts ["en-US", "zh-Hans-CN"] ``` *** ### systemLocales (Deprecated) ```ts const systemLocales: string[] ``` Deprecated. Use `preferredLanguages` instead. *** ### systemLanguageTag ```ts const systemLanguageTag: string ``` The current language tag, such as `"en-US"`. *** ### systemLanguageCode ```ts const systemLanguageCode: string ``` The current language code, such as `"en"`. *** ### systemCountryCode ```ts const systemCountryCode: string | undefined ``` The current country code, such as `"US"`. *** ### systemScriptCode ```ts const systemScriptCode: string | undefined ``` The script code of the current locale, such as `"Hans"` for Simplified Chinese. *** ## Wake Lock ### isWakeLockEnabled ```ts const isWakeLockEnabled: Promise ``` Retrieves whether the wake lock is currently enabled, preventing the device from automatically sleeping. *** ### setWakeLockEnabled ```ts function setWakeLockEnabled(enabled: boolean): void ``` Enables or disables the wake lock. Notes: - Available only in the **Scripting app** - When enabled, the device will remain awake and not auto-lock *** ## Battery Listeners ### addBatteryStateListener ```ts function addBatteryStateListener( callback: (state: BatteryState) => void ): void ``` Registers a listener for battery state changes. 
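As a small sketch of how the listener APIs might be used together (assuming the `BatteryState` type shown above is available in scope; the callback body is illustrative):

```ts
import { Device } from "scripting"

// Log battery state changes while the script is running (illustrative callback).
function onBatteryState(state: BatteryState) {
  console.log("Battery state changed:", state)
}

Device.addBatteryStateListener(onBatteryState)

// Remove the listener when updates are no longer needed (see the next section).
Device.removeBatteryStateListener(onBatteryState)
```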
*** ### removeBatteryStateListener ```ts function removeBatteryStateListener( callback?: (state: BatteryState) => void ): void ``` Removes a battery state listener. If `callback` is not provided, all battery state listeners are removed. *** ### addBatteryLevelListener ```ts function addBatteryLevelListener( callback: (level: number) => void ): void ``` Registers a listener for battery level changes. *** ### removeBatteryLevelListener ```ts function removeBatteryLevelListener( callback?: (level: number) => void ): void ``` Removes a battery level listener. If `callback` is not provided, all battery level listeners are removed. *** ## Orientation Listeners ### addOrientationListener ```ts function addOrientationListener( callback: (orientation: Orientation) => void ): void ``` Starts observing device orientation changes. Notes: - This method must be called before orientation updates are delivered - Orientation updates do not work when system orientation lock is enabled *** ### removeOrientationListener ```ts function removeOrientationListener( callback?: (orientation: Orientation) => void ): void ``` Removes an orientation change listener. If `callback` is not provided, all orientation listeners are removed and observation is stopped. *** ## Proximity Listeners ### addProximityStateListener ```ts function addProximityStateListener( callback: (state: boolean) => void ): void ``` Registers a listener for proximity sensor state changes. *** ### removeProximityStateListener ```ts function removeProximityStateListener( callback?: (state: boolean) => void ): void ``` Removes a proximity state listener. If `callback` is not provided, all proximity listeners are removed. *** ## Network ### networkInterfaces ```ts function networkInterfaces(): Record ``` Returns the network interfaces available on the device. Return value: - Keys are interface names (such as `en0`, `lo0`) - Values are arrays of `NetworkInterface` objects associated with each interface This method is useful for network diagnostics, retrieving local IP addresses, and debugging connectivity issues. --- url: /TestFlight/guide/Changelog/2.4.6/Device/index_example.md --- # Example ```tsx import { Button, Device, List, Navigation, NavigationStack, Script, Text, VStack } from "scripting" function Example() { const dismiss = Navigation.useDismiss() const details: { name: string value: string | boolean | number }[] = [ { name: "Device.isiPhone", value: Device.isiPhone }, { name: "Device.isiPad", value: Device.isiPad, }, { name: "Device.systemVersion", value: Device.systemVersion, }, { name: "Device.systemName", value: Device.systemName, }, { name: "Device.isPortrait", value: Device.isPortrait, }, { name: "Device.isLandscape", value: Device.isLandscape, }, { name: "Device.isFlat", value: Device.isFlat, }, { name: "Device.batteryLevel", value: Device.batteryLevel, }, { name: "Device.batteryState", value: Device.batteryState, } ] return }} > {details.map(item => {item.name} {typeof item.value} )} } async function run() { await Navigation.present({ element: }) Script.exit() } run() ``` --- url: /TestFlight/guide/Changelog/2.4.6/Intent.md --- # Intent Scripting allows you to define custom iOS Intents using an `intent.tsx` file. These scripts can receive input from the iOS share sheet or the Shortcuts app and return structured results. With optional UI presentation, you can create interactive workflows that process data and deliver output dynamically. *** ## 1. Creating and Configuring an Intent ### 1.1 Create an Intent Script 1. 
Create a new script project in the Scripting app. 2. Add a file named `intent.tsx` to the project. 3. Define your logic and optionally a UI component inside the file. ### 1.2 Configure Supported Input Types Tap the project title in the editor’s title bar to open **Intent Settings**, then select supported input types: - Text - Images - File URLs - URLs This configuration enables your script to appear in the share sheet or Shortcuts when matching input is provided. *** ## 2. Accessing Input Data Inside `intent.tsx`, use the `Intent` API to access input values. | Property | Description | | -------------------------- | ----------------------------------------------------------------------------------- | | `Intent.shortcutParameter` | A single parameter passed from the Shortcuts app, with `.type` and `.value` fields. | | `Intent.textsParameter` | Array of text strings. | | `Intent.urlsParameter` | Array of URL strings. | | `Intent.imagesParameter` | Array of image file paths (UIImage objects). | | `Intent.fileURLsParameter` | Array of local file URL paths. | Example: ```ts if (Intent.shortcutParameter) { if (Intent.shortcutParameter.type === "text") { console.log(Intent.shortcutParameter.value) } } ``` *** ## 3. Returning a Result Use `Script.exit(result)` to return a result to the caller, such as the Shortcuts app or another script. Valid return types include: - Plain text: `Intent.text(value)` - Attributed text: `Intent.attributedText(value)` - URL: `Intent.url(value)` - JSON: `Intent.json(value)` - File path or file URL: `Intent.file(value)` or `Intent.fileURL(value)` Example: ```ts import { Script, Intent } from "scripting" Script.exit(Intent.text("Done")) ``` *** ## 4. Displaying Interactive UI Use `Navigation.present()` to show a UI before returning a result. You can render a React-style component and then call `Script.exit()` after the interaction completes. Example: ```ts import { Intent, Script, Navigation, VStack, Text } from "scripting" function MyIntentView() { return ( {Intent.textsParameter?.[0]} ) } async function run() { await Navigation.present({ element: }) Script.exit() } run() ``` *** ## 5. Using Intents in the Share Sheet If a script supports a specific input type (e.g., text or image), it will automatically appear as an option in the iOS share sheet: 1. Select content such as text or a file. 2. Tap the Share button. 3. Choose **Scripting** in the share sheet. 4. Scripting will list scripts that support the selected input type. *** ## 6. Using Intents in the Shortcuts App You can call scripts from the Shortcuts app with or without UI: - **Run Script**: Executes the script in the background. - **Run Script in App**: Executes the script in the foreground, with UI presentation support. Steps: 1. Open the Shortcuts app and create a new shortcut. 2. Add the **Run Script** or **Run Script in App** action from Scripting. 3. Choose the target script and pass input parameters if needed. *** ## 7. Intent API Reference ### `Intent` Properties | Property | Type | Description | | ------------------- | ------------------- | ----------------------------------------------- | | `shortcutParameter` | `ShortcutParameter` | Input from Shortcuts with `.type` and `.value`. | | `textsParameter` | `string[]` | Array of input text values. | | `urlsParameter` | `string[]` | Array of input URLs. | | `imagesParameter` | `UIImage[]` | Array of image file paths or objects. | | `fileURLsParameter` | `string[]` | Array of input file paths (local file URLs). 
| ### `Intent` Methods | Method | Return Type | Example | | ------------------------------ | --------------------------- | -------------------------------------- | | `Intent.text(value)` | `IntentTextValue` | `Intent.text("Hello")` | | `Intent.attributedText(value)` | `IntentAttributedTextValue` | `Intent.attributedText("Styled Text")` | | `Intent.url(value)` | `IntentURLValue` | `Intent.url("https://example.com")` | | `Intent.json(value)` | `IntentJsonValue` | `Intent.json({ key: "value" })` | | `Intent.file(path)` | `IntentFileValue` | `Intent.file("/path/to/file.txt")` | | `Intent.fileURL(path)` | `IntentFileURLValue` | `Intent.fileURL("/path/to/file.pdf")` | | `Intent.image(UIImage)` | `IntentImageValue` | `Intent.image(uiImage)` | *** ## 8. Best Practices and Notes - Always call `Script.exit()` to properly terminate the script and return a result. - When displaying a UI, ensure `Navigation.present()` is awaited before calling `Script.exit()`. - Use **"Run Script in App"** for large files or images to avoid process termination due to memory constraints. - You can use `queryParameters` when launching scripts via URL scheme if additional data is needed. --- url: /TestFlight/guide/Changelog/2.4.6/ItemProvider.md --- # ItemProvider `ItemProvider` represents a **deferred data provider** used to access content such as files, images, text, or URLs in a controlled and secure way. It is commonly used in scenarios like drag and drop, file importing, and content selection from Photos or Files. An `ItemProvider` does not store the data itself. Instead, it describes **how and under what constraints the data can be accessed**. *** ## Core Concepts - `ItemProvider` describes capabilities, not concrete data - Data loading is always subject to system security restrictions - File-based resources can only be accessed within a limited, controlled scope - Whether a file can be accessed in place is determined by the underlying system *** ## Properties ### registeredTypes ```ts readonly registeredTypes: UTType[] ``` Represents all types that the item provider can supply at a semantic level. - Includes both concrete types and inferred parent types - Useful for high-level content classification or debugging - Does not guarantee that a concrete file representation exists *** ### registeredInPlaceTypes ```ts readonly registeredInPlaceTypes: UTType[] ``` Represents the set of types that support open-in-place access. - Typically applies to large resources such as videos, audio files, or documents - Actual in-place access is determined at load time *** ## Capability Checks ### hasItemConforming ```ts hasItemConforming(type: UTType): boolean ``` Checks whether the content semantically conforms to the specified type. - Performs a broad, semantic check - Considers UTType inheritance - Suitable for branching logic and content classification *** ### hasRepresentationConforming ```ts hasRepresentationConforming(type: UTType): boolean ``` Checks whether a concrete, loadable representation exists for the specified type. - Performs a strict check - Suitable for file processing and format-specific workflows *** ### hasInPlaceRepresentationConforming ```ts hasInPlaceRepresentationConforming(type: UTType): boolean ``` Checks whether a representation supporting open-in-place access exists. - Commonly used to choose loading strategies for large files *** ## Object Loading Capabilities ### canLoadUIImage ```ts canLoadUIImage(): boolean ``` Indicates whether the content can be loaded as a `UIImage`. 
- Intended for UI display - Does not guarantee preservation of original format or metadata *** ### canLoadLivePhoto ```ts canLoadLivePhoto(): boolean ``` Indicates whether the content can be loaded as a `LivePhoto`. - Used to distinguish Live Photos from static images - When true, `loadLivePhoto` can be called *** ## Loading Methods ### loadUIImage ```ts loadUIImage(): Promise ``` Loads a `UIImage` object. - Suitable for lightweight display - Not intended for file-level processing or asset preservation *** ### loadLivePhoto ```ts loadLivePhoto(): Promise ``` Loads a `LivePhoto` object. - Includes both the still image and paired video - Suitable for display, saving, or further processing *** ### loadURL ```ts loadURL(): Promise ``` Loads a URL and returns it as a string. - May represent a web URL or a file URL *** ### loadText ```ts loadText(): Promise ``` Loads plain text content. - Supports plain text - Rich text is automatically converted to plain text *** ### loadData ```ts loadData(type: UTType): Promise ``` Loads raw binary data for the specified type. - The entire data payload is loaded into memory - Suitable for JSON, configuration files, or small resources - Not recommended for large files such as video or audio *** ## File Path Loading and Security Scope Access to file paths is subject to strict security rules. All file access must occur within a limited callback scope provided by the API. *** ### loadFilePath ```ts loadFilePath(type: UTType): Promise ``` Loads a file path for the specified type. If the item provider can load data as the specified type, this file will be copied to the app group's temporary directory and the file path will be returned, otherwise null will be returned. You should delete the file when it is no longer needed. Example: ```ts const filePath = provider.loadFilePath("public.movie") ``` *** ## Creating an ItemProvider ### fromUIImage ```ts ItemProvider.fromUIImage(image: UIImage): ItemProvider ``` Creates an `ItemProvider` from a `UIImage`. - Provides static image capabilities only - Does not include Live Photo or original asset information *** ### fromText ```ts ItemProvider.fromText(text: string): ItemProvider ``` Creates an `ItemProvider` from a text string. *** ### fromURL ```ts ItemProvider.fromURL(url: string): ItemProvider | null ``` Creates an `ItemProvider` from a URL string. - Returns `null` if the URL is invalid - Supports both web URLs and file URLs *** ### fromFilePath ```ts ItemProvider.fromFilePath(path: string): ItemProvider ``` Creates an `ItemProvider` from a file path. - Preserves the original file - Suitable for videos, audio, and documents - Supports open-in-place capability checks *** ## Usage Guidelines - Use `hasItemConforming` to determine content categories - Use object loading methods for UI display - Use file path loading methods for large resources - Always access files only within the provided callback scope - Never defer access to security-scoped files outside the callback --- url: /TestFlight/guide/Changelog/2.4.6/MediaComposer/MediaComposer Example.md --- # MediaComposer Example This example demonstrates how to use `MediaComposer` to compose a final video from **video, image, and audio sources**, and export it to the script directory. The workflow covered in this example includes: 1. Picking an audio file 2. Picking an image 3. Picking a video 4. Building a visual timeline (video + image) 5. Inserting audio at a specific time 6. 
Exporting the composed video *** ## Example Code ```tsx import { Path, Script } from "scripting" console.present().then(() => Script.exit()) async function run() { try { const audioPath = (await DocumentPicker.pickFiles({ types: ["public.audio"] })).at(0) if (audioPath == null) { console.error("no audio") return } console.log("Audio Picked") const imageResult = (await Photos.pick({ filter: PHPickerFilter.images() })).at(0) const imagePath = await imageResult?.itemProvider.loadFilePath("public.image") if (!imagePath) { console.log("No image") return } console.log("Image picked") const videoResult = (await Photos.pick({ filter: PHPickerFilter.videos() })).at(0) const videoPath = await videoResult?.itemProvider.loadFilePath("public.movie") if (videoPath == null) { console.log("No video") return } console.log("Video Picked") console.log("Start composing...") const exportPath = Path.join( Script.directory, "dest.mp4" ) const exportResult = await MediaComposer.composeAndExport({ exportPath, timeline: { videoItems: [{ videoPath: videoPath }, { imagePath: imagePath, duration: MediaTime.make({ seconds: 5, preferredTimescale: 600 }) }], audioClips: [{ path: audioPath, at: MediaTime.make({ seconds: 5, preferredTimescale: 600 }) }] } }) console.log( "Result:", exportResult.exportPath, "\n", exportResult.duration.getSeconds() ) } catch (e) { console.error(e) } } run() ``` *** ## Timeline Breakdown ### Visual Timeline (videoItems) ```ts videoItems: [ { videoPath }, { imagePath, duration: MediaTime.make({ seconds: 5, preferredTimescale: 600 }) } ] ``` - The first `VideoItem` is a full video clip - The second `VideoItem` is an image displayed for 5 seconds - All `videoItems` are concatenated **in strict order** - Final video duration = video duration + 5 seconds *** ### Audio Timeline (audioClips) ```ts audioClips: [{ path: audioPath, at: MediaTime.make({ seconds: 5, preferredTimescale: 600 }) }] ``` - The audio starts playing at **5 seconds** on the final timeline - When `at` is omitted, audio clips are appended sequentially - Audio does **not** affect the final video duration *** ## Export Result ```ts { exportPath: string duration: MediaTime } ``` - `exportPath`: the full output file path - `duration`: the total video duration (derived from `videoItems`) *** ## Common Errors and Edge Cases ### 1. ImageClip without duration ```ts { imagePath: "...", // ❌ missing duration } ``` **Issue:** - Images have no intrinsic duration - Omitting `duration` will cause composition to fail **Solution:** - Always provide an explicit `MediaTime` duration *** ### 2. Using raw numbers instead of MediaTime ```ts // ❌ incorrect at: 5 ``` **Correct usage:** ```ts at: MediaTime.make({ seconds: 5, preferredTimescale: 600 }) ``` All time values in MediaComposer **must** be represented by `MediaTime`. *** ### 3. Mixed timescales causing precision issues **Issue:** - Different media sources may use different timescales - This can lead to rounding errors during trimming, fades, or alignment **Recommendation:** - Use a consistent `preferredTimescale` (e.g. 600) - Convert external times using `convertScale` when needed *** ### 4. Audio extending beyond the video duration **Behavior:** - Audio that exceeds the end of the video does not extend the final duration - Any audio beyond the video end is automatically truncated *** ### 5. 
Unexpected audio balance when mixing original and external audio **Cause:** - By default, original video audio and external audio are mixed together - Without ducking, dialogue may be masked by background music *** ## Audio Ducking Behavior ### What is Ducking Ducking refers to: > Automatically lowering the volume of external audio (e.g. background music) when original video audio (e.g. dialogue) is present. *** ### Ducking Configuration ```ts exportOptions: { ducking: { enabled: true, duckedVolume: 0.25, attackSeconds: 0.15, releaseSeconds: 0.25 } } ``` #### Parameters - **enabled** Enables or disables ducking (default: `true`) - **duckedVolume** Target volume for external audio during ducking (0…1) - **attackSeconds** Ramp-down duration before original audio starts - **releaseSeconds** Ramp-up duration after original audio ends *** ### Conditions for Ducking to Apply Ducking is applied only when all of the following are true: 1. `VideoClip.keepOriginalAudio === true` 2. At least one external `AudioClip` exists 3. `exportOptions.ducking.enabled !== false` *** ## Audio Mixing Rules Summary 1. **Original Video Audio** - Included only when `keepOriginalAudio` is set to `true` 2. **External Audio** - Can be positioned or appended sequentially - Supports per-clip `volume`, `fade`, and looping 3. **Final Mix** - All audio sources are mixed into a single output track - Audio never changes the final video duration - Ducking is applied automatically during mixing --- url: /TestFlight/guide/Changelog/2.4.6/MediaComposer/MediaTime.md --- # MediaTime `MediaTime` represents **precise media time values** in audio and video processing. It is the fundamental time type used by MediaComposer in Scripting. Conceptually, `MediaTime` corresponds to a time value with an explicit time base (similar to `CMTime` in AVFoundation), but provides a safer and more expressive abstraction for the scripting layer. A `MediaTime` instance can represent **numeric time**, **invalid time**, **indefinite time**, or **infinite time**, and supports strict arithmetic and comparison operations. *** ## Key Features - Precise construction using **value + timescale** or **seconds + preferredTimescale** - Time scaling with configurable rounding methods - Safe arithmetic and comparison operations - Explicit modeling of invalid, indefinite, and infinite time values - Designed for timeline composition, trimming, alignment, fades, and placement *** ## Time Precision Model `MediaTime` is based on the following core concepts: - **value**: an integer time value - **timescale**: the number of time units per second Examples: - `value = 300`, `timescale = 600` → 0.5 seconds - `value = 18000`, `timescale = 600` → 30 seconds This model allows frame-accurate or sample-accurate timing without relying on floating-point arithmetic. *** ## Read-only Properties ### secondes ```ts readonly secondes: number ``` The time expressed in seconds as a floating-point value. This is a derived value intended mainly for display or debugging. It is **not recommended for timeline calculations**. *** ### isValid ```ts readonly isValid: boolean ``` Indicates whether the time is valid and usable for calculations. Returns `false` for invalid, indefinite, or infinite time values. *** ### isPositiveInfinity / isNegativeInfinity ```ts readonly isPositiveInfinity: boolean readonly isNegativeInfinity: boolean ``` Indicates whether the time represents positive or negative infinity. These values are typically used as internal boundary markers in timeline logic. 
***

### isIndefinite

```ts
readonly isIndefinite: boolean
```

Indicates whether the time is indefinite. This is commonly used when a media asset’s duration has not yet been determined.

***

### isNumeric

```ts
readonly isNumeric: boolean
```

Indicates whether the time can participate in numeric calculations. Arithmetic and comparison operations should only be performed when this value is `true`.

***

### hasBeenRounded

```ts
readonly hasBeenRounded: boolean
```

Indicates whether the time has undergone rounding during construction or scale conversion. This is useful when validating frame- or sample-accurate timelines.

***

## Time Conversion

### convertScale

```ts
convertScale(newTimescale: number, method: MediaTimeRoundingMethod): MediaTime
```

Converts the time to a new timescale using the specified rounding method.

**Typical use cases:**

- Aligning video frame timing (e.g. 600, 90000)
- Aligning audio sample timing (e.g. 44100, 48000)
- Avoiding precision errors caused by mixed timescales

***

## Accessing Time Values

### getSeconds

```ts
getSeconds(): number
```

Returns the time expressed in seconds as a floating-point value. Semantically equivalent to reading `secondes`, but clearer in intent.

***

## Time Arithmetic

### plus / minus

```ts
plus(other: MediaTime): MediaTime
minus(other: MediaTime): MediaTime
```

Performs time addition or subtraction and returns a new `MediaTime`.

- Both operands must be numeric
- The original instances are not modified
- The result follows the internal time base rules

***

## Time Comparison

```ts
lt(other: MediaTime): boolean
gt(other: MediaTime): boolean
lte(other: MediaTime): boolean
gte(other: MediaTime): boolean
eq(other: MediaTime): boolean
neq(other: MediaTime): boolean
```

Compares two time values.

- Supports strict ordering and equality checks
- Produces deterministic results even for non-numeric times
- Recommended for timeline sorting, trimming, and boundary checks

***

## Static Constructors

### make

```ts
static make(options: {
  value: number
  timescale: number
} | {
  seconds: number
  preferredTimescale: number
}): MediaTime
```

Creates a `MediaTime` instance.

#### Using value + timescale

```ts
MediaTime.make({ value: 300, timescale: 600 })
```

Best suited for low-level or precision-critical scenarios.

***

#### Using seconds + preferredTimescale

```ts
MediaTime.make({ seconds: 5, preferredTimescale: 600 })
```

Recommended for most scripting-level use cases where seconds are the primary unit.

***

### zero

```ts
static zero(): MediaTime
```

Returns a `MediaTime` representing **0 seconds**.

***

### invalid

```ts
static invalid(): MediaTime
```

Returns an invalid time value. Useful for explicitly representing errors or unavailable timing information.

***

### indefinite

```ts
static indefinite(): MediaTime
```

Returns an indefinite time value. Typically used when a media asset’s duration is not yet known.

***

### positiveInfinity / negativeInfinity

```ts
static positiveInfinity(): MediaTime
static negativeInfinity(): MediaTime
```

Returns positive or negative infinite time values. These are mainly intended for internal timeline boundary handling and are not recommended for general scripting logic.
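As a small sketch of how these pieces fit together — construction with both factory forms, a numeric guard, arithmetic, and comparison (all calls follow the signatures above; the clip values are illustrative):

```ts
// Construct times with both factory forms (values are illustrative).
const clipStart = MediaTime.make({ seconds: 5, preferredTimescale: 600 })
const clipLength = MediaTime.make({ value: 1800, timescale: 600 }) // 3 seconds

if (clipStart.isNumeric && clipLength.isNumeric) {
  const clipEnd = clipStart.plus(clipLength)
  console.log(clipEnd.getSeconds()) // 8

  // Comparison helpers are useful for timeline boundary checks.
  if (clipEnd.gt(MediaTime.zero())) {
    console.log("Clip ends after the timeline origin")
  }
}
```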
*** ## Usage Guidelines and Best Practices - Avoid using floating-point seconds directly for timeline calculations; prefer `MediaTime` - Explicitly convert timescales when mixing audio and video sources - Check `isNumeric` before performing arithmetic or comparisons - Use consistent timescales when constructing `TimeRange` or `at` values *** ## Typical Usage in MediaComposer - Placing audio or video clips on the timeline (`AudioClip.at`) - Defining trimming ranges (`TimeRange`) - Calculating precise export durations - Driving fades, alignment, looping, and synchronization behavior --- url: /TestFlight/guide/Changelog/2.4.6/MediaComposer/Quick Start.md --- # Quick Start `MediaComposer` is used in Scripting to **compose video, image, and audio timelines and export a final media file**. It provides a stable and precise timeline model that supports video clips, image clips, audio overlays, fades, audio ducking, and flexible export configuration. This module is suitable for: - Mixing videos and images into a single output - Adding background music, voice-over, or sound effects - Generating videos from image sequences - Automated and script-driven media production *** ## Design Overview MediaComposer consists of three core layers: 1. **Time Model** Based on `MediaTime` and `TimeRange` for precise time representation 2. **Timeline Model** - `VideoItem[]`: visual timeline (videos or images, sequential) - `AudioClip[]`: audio timeline (positioned or sequential) 3. **Export System** A unified `composeAndExport` API for rendering and exporting *** ## Timeline Structure ```ts timeline: { videoItems: VideoItem[] audioClips: AudioClip[] } ``` - **videoItems** Defines the visual timeline. Video and image items are concatenated strictly in array order. - **audioClips** Defines the audio timeline. Clips may be explicitly positioned or appended sequentially. The final exported duration is determined by the **videoItems timeline**. *** ## VideoItem ```ts type VideoItem = XOR ``` A `VideoItem` represents a single visual segment in the timeline. It can be either a **video clip** or an **image clip**, but never both. *** ## VideoClip ```ts type VideoClip = { videoPath: string sourceTimeRange?: TimeRange | null keepOriginalAudio?: boolean fade?: FadeConfig | null } ``` ### videoPath - Path to the video file - Local video files are supported *** ### sourceTimeRange ```ts sourceTimeRange?: TimeRange | null ``` - Specifies the portion of the source video to use - Defaults to the entire video when omitted **Common use cases:** - Trimming a video - Extracting a specific segment as material *** ### keepOriginalAudio ```ts keepOriginalAudio?: boolean ``` - Whether to keep the original audio from the video - Default: `false` **Notes:** - When `true`, the video’s original audio participates in the final mix - External `audioClips` may still be used simultaneously - Ducking behavior is controlled by `ExportOptions.ducking` *** ### fade ```ts fade?: FadeConfig | null ``` - Per-clip fade-in / fade-out configuration - Overrides the global video fade when provided *** ## ImageClip ```ts type ImageClip = { imagePath: string duration: MediaTime contentMode?: "fit" | "crop" backgroundColor?: Color fade?: FadeConfig | null } ``` `ImageClip` allows a still image to appear as a timed segment within the video timeline. *** ### imagePath - Path to the image file - Common image formats are supported (JPEG, PNG, HEIC, etc.) 
*** ### duration ```ts duration: MediaTime ``` - The display duration of the image clip in the video - This field is required *** ### contentMode ```ts contentMode?: "fit" | "crop" ``` - Controls how the image is scaled to the render size - Default: `fit` Behavior: - `fit`: Entire image is visible; letterboxing may occur - `crop`: Image fills the frame; excess is cropped *** ### backgroundColor ```ts backgroundColor?: Color ``` - Background color for areas not covered by the image - Commonly used together with `fit` mode *** ### fade ```ts fade?: FadeConfig | null ``` - Fade-in / fade-out configuration for the image clip - Behaves the same as fades for video clips *** ## AudioClip ```ts type AudioClip = { path: string sourceTimeRange?: TimeRange | null at?: MediaTime volume?: number fade?: FadeConfig | null loopToFitVideoDuration?: boolean } ``` `AudioClip` is used to add background music, narration, or sound effects to the final video. *** ### path - Path to the audio file *** ### sourceTimeRange - Specifies the portion of the audio to use - Defaults to the entire audio file *** ### at ```ts at?: MediaTime ``` - Explicit placement time on the final timeline - When omitted, audio clips are appended sequentially *** ### volume ```ts volume?: number ``` - Per-clip gain (0…1) - Default: `1` *** ### fade - Fade-in / fade-out configuration for the audio clip *** ### loopToFitVideoDuration ```ts loopToFitVideoDuration?: boolean ``` - Whether the audio should loop to match the total video duration - Commonly used for background music *** ## FadeConfig ```ts type FadeConfig = { fadeInSeconds?: number fadeOutSeconds?: number } ``` - Duration is expressed in seconds - Applicable to video, image, and audio clips - Defaults to 0 when omitted *** ## ExportOptions ```ts type ExportOptions = { renderSize?: Size frameRate?: number scaleMode?: VideoScaleMode globalVideoFade?: FadeConfig | null externalAudioBaseVolume?: number ducking?: DuckingConfig presetName?: ExportPreset outputFileType?: ExportFileType colorSpacePolicy?: ColorSpacePolicy } ``` ### Common options - **renderSize** Output resolution, default is 1080×1920 - **frameRate** Rendering frame rate, default is 30 - **globalVideoFade** Global fade applied to all visual clips unless overridden - **ducking** Automatically lowers external audio volume when original video audio exists - **presetName / outputFileType** Control encoding quality and output format - **colorSpacePolicy** Color space conversion policy, default is `forceSDR`, other options are `keepSource`. 
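Putting the options above together, a typical configuration might look like the following sketch. The field values are illustrative; the `{ width, height }` shape for `Size`, the ducking fields, and the `"forceSDR"` string literal are assumptions based on the defaults named in this page and the MediaComposer example:

```ts
const exportOptions: ExportOptions = {
  renderSize: { width: 1080, height: 1920 }, // assumed { width, height } shape for Size
  frameRate: 30,
  globalVideoFade: { fadeInSeconds: 0.5, fadeOutSeconds: 0.5 },
  externalAudioBaseVolume: 0.8,
  ducking: {
    enabled: true,      // duck background music under original dialogue
    duckedVolume: 0.25
  },
  colorSpacePolicy: "forceSDR" // assumed literal, matching the default named above
}
```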
*** ## composeAndExport ```ts function composeAndExport(options: { exportPath: string timeline: { videoItems: VideoItem[] audioClips: AudioClip[] } exportOptions?: ExportOptions overwrite?: boolean }): Promise<{ exportPath: string duration: MediaTime }> ``` ### Parameters - **exportPath** Output file path - **timeline.videoItems** Visual timeline (videos and images, in sequence) - **timeline.audioClips** Audio timeline (positioned or sequential) - **exportOptions** Optional export configuration - **overwrite** Whether to overwrite an existing file (default: `true`) *** ### Return Value ```ts { exportPath: string duration: MediaTime } ``` - **exportPath**: final output path - **duration**: total duration of the exported video (derived from `videoItems`) *** ## Usage Guidelines and Best Practices - Always use `MediaTime` for time values; avoid raw floating-point seconds - `ImageClip.duration` must always be explicitly specified - Audio and visual timelines are independent but mixed in the final output - For complex projects, use a consistent timescale (e.g. 600) - Background music typically uses `loopToFitVideoDuration` *** ## Typical Use Cases - Mixed image and video composition - Adding background music or narration to videos - Automated video generation - Script-driven content creation pipelines --- url: /TestFlight/guide/Changelog/2.4.6/SharedAudioSession.md --- # SharedAudioSession The `SharedAudioSession` interface provides a convenient way to manage and interact with the shared audio session in your script. The audio session acts as an intermediary between your script, the Scripting app, the operating system, and the underlying audio hardware, enabling you to configure and control audio behavior effectively. ## Features - Retrieve and set audio session categories, modes, and options. - Configure the preferred sample rate for audio input and output. - Handle audio interruptions. - Query device capabilities for supported categories and modes. - Tailor audio behaviors for specific app use cases, such as video recording, voice chat, or background playback. *** ## Methods and Properties ### 1. **Session Category and Options** #### `category` Get the current audio session category. ```typescript const category = await SharedAudioSession.category console.log(category) // Example: 'playback' ``` #### `categoryOptions` Retrieve the current audio session category options. ```typescript const options = await SharedAudioSession.categoryOptions console.log(options) // Example: ['mixWithOthers', 'allowAirPlay'] ``` #### `setCategory(category: AudioSessionCategory, options: AudioSessionCategoryOptions[])` Set the audio session category with specific options. ```typescript await SharedAudioSession.setCategory('playback', ['mixWithOthers']) ``` *** ### 2. **Session Mode** #### `mode` Retrieve the current audio session mode. ```typescript const mode = await SharedAudioSession.mode console.log(mode) // Example: 'videoChat' ``` #### `setMode(mode: AudioSessionMode)` Set the audio session mode. ```typescript await SharedAudioSession.setMode('voiceChat') ``` *** ### 3. **Sample Rate** #### `preferredSampleRate` Retrieve the preferred sample rate in hertz. ```typescript const sampleRate = await SharedAudioSession.preferredSampleRate console.log(sampleRate) // Example: 44100 ``` #### `setPreferredSampleRate(sampleRate: number)` Set the preferred sample rate for audio input and output. ```typescript await SharedAudioSession.setPreferredSampleRate(48000) ``` *** ### 4. 
**Interruption Handling** #### `addInterruptionListener(listener: AudioSessionInterruptionListener)` Listen for audio interruptions. ```typescript SharedAudioSession.addInterruptionListener((type) => { if (type === 'began') { console.log('Audio interruption began') } else if (type === 'ended') { console.log('Audio interruption ended') } }) ``` #### `removeInterruptionListener(listener: AudioSessionInterruptionListener)` Remove an interruption listener. ```typescript SharedAudioSession.removeInterruptionListener(myListener) ``` *** ### 5. **Device Capabilities** #### `availableCategories` Get the list of audio session categories available on the device. ```typescript const categories = await SharedAudioSession.availableCategories console.log(categories) // Example: ['playback', 'record', 'soloAmbient'] ``` #### `availableModes` Get the list of audio session modes available on the device. ```typescript const modes = await SharedAudioSession.availableModes console.log(modes) // Example: ['default', 'videoChat', 'voiceChat'] ``` *** ### 6. **Additional Properties** #### `isOtherAudioPlaying` Check if other audio is currently playing on the device. ```typescript const isPlaying = await SharedAudioSession.isOtherAudioPlaying console.log(isPlaying) // Example: true ``` #### `secondaryAudioShouldBeSilencedHint` Check if secondary audio should be silenced. ```typescript const shouldSilence = await SharedAudioSession.secondaryAudioShouldBeSilencedHint console.log(shouldSilence) // Example: false ``` #### `allowHapticsAndSystemSoundsDuringRecording` Check if haptics and system sounds are allowed during recording. ```typescript const allowHaptics = await SharedAudioSession.allowHapticsAndSystemSoundsDuringRecording console.log(allowHaptics) // Example: true ``` #### `prefersNoInterruptionsFromSystemAlerts` Check if the session prefers no interruptions from system alerts. ```typescript const prefersNoInterruptions = await SharedAudioSession.prefersNoInterruptionsFromSystemAlerts console.log(prefersNoInterruptions) // Example: false ``` *** ### 7. **Session Activation** #### `setActive(active: boolean, options?: AudioSessionSetActiveOptions[])` Activate or deactivate the shared audio session with optional options. - `active`: Set to `true` to activate the session, `false` to deactivate it. - `options`: An array of optional activation options, such as 'interruptSpokenAudioAndMixWithOthers'. ```typescript await SharedAudioSession.setActive( true, ['notifyOthersOnDeactivation'] ) ``` *** ### 8. **System Settings** #### `setAllowHapticsAndSystemSoundsDuringRecording(value: boolean)` Enable or disable haptics and system sounds during recording. ```typescript await SharedAudioSession.setAllowHapticsAndSystemSoundsDuringRecording(true) ``` #### `setPrefersNoInterruptionsFromSystemAlerts(value: boolean)` Set the preference for no interruptions from system alerts. ```typescript await SharedAudioSession.setPrefersNoInterruptionsFromSystemAlerts(true) ``` *** ### 9. **Systemwide Output Volume** #### `outputVolume: number` The systemwide output volume. This property is a number between 0 and 1, representing the volume level as a percentage. #### outputVolume EventListener Type Definition: ```ts type AudioSessionOutputVolumeListener = (newValue: number, oldValue: number) => void ``` ##### `addOutputVolumeListener(listener: AudioSessionOutputVolumeListener)` Add an event listener for changes in the systemwide output volume. 
```typescript SharedAudioSession.addOutputVolumeListener((newValue, oldValue) => { console.log(`Output volume changed from ${oldValue} to ${newValue}`) }) ``` ##### `removeOutputVolumeListener(listener: AudioSessionOutputVolumeListener)` Remove an event listener for changes in the systemwide output volume. *** ## Enumerations ### **AudioSessionSetActiveOptions** Optional activation options: - `'notifyOthersOnDeactivation'`: Notify other audio sessions when deactivating the shared audio session. ### **AudioSessionCategory** Defines the session's audio category: - `'ambient'`: Ambient audio, such as background music or ambient sounds. - `'multiRoute'`: Multi-route audio, such as routing distinct streams of audio data to different output devices at the same time. - `'playAndRecord'`: Play and record audio, such as voice chat or video conferencing. - `'playback'`: Playback audio, such as music or sound effects. - `'record'`: Recording audio, such as voice chat or video conferencing. - `'soloAmbient'`: Solo ambient audio, such as background music or ambient sounds. ### **AudioSessionCategoryOptions** Optional behaviors for audio categories: - `'mixWithOthers'`: Mix with other audio sessions. - `'duckOthers'`: Duck other audio sessions. - `'interruptSpokenAudioAndMixWithOthers'`: Interrupt spoken audio and mix with others. - `'allowBluetooth'`: Allow Bluetooth audio. - `'allowBluetoothA2DP'`: Allow Bluetooth A2DP audio. - `'allowAirPlay'`: Allow AirPlay audio. - `'defaultToSpeaker'`: Default to speaker, even if headphones are connected. - `'overrideMutedMicrophoneInterruption'`: Override muted microphone interruption. ### **AudioSessionMode** Specifies the session's mode: - `'default'`: Default mode. - `'gameChat'`: Game chat mode. - `'measurement'`: Measurement mode, such as audio input or output. - `'moviePlayback'`: Movie playback mode, such as movie content. - `'spokenAudio'`: Spoken audio mode, such as voice chat. - `'videoChat'`: Video chat mode, such as video conferencing. - `'videoRecording'`: Video recording mode, such as video conferencing. - `'voicePrompt'`: Voice prompt mode, such as text-to-speech. ### **AudioSessionInterruptionType** Specifies the type of interruption: - `'began'` - `'ended'` - `'unknown'` *** This interface offers extensive control over audio session management in Scripting, making it suitable for building audio-heavy script like music players and video conferencing tools. --- url: /TestFlight/guide/Changelog/2.4.6/onDrag and onDrop View Modifiers.md --- # onDrag and onDrop View Modifiers Scripting provides a Drag & Drop API closely aligned with the SwiftUI drag-and-drop interaction model. It enables views to act as drag sources, drop destinations, or both, supporting intra-app and cross-app drag-and-drop scenarios. The API is composed of three core parts: - **onDrag**: Declares a view as a drag source - **onDrop**: Declares a view as a drop destination - **DropInfo / ItemProvider / UTType**: Context objects describing drag content and state Drag and drop is a system-controlled interaction. Certain APIs are only valid during specific callbacks. These constraints are explicitly documented below and must be respected. *** ## Core Types ### DropInfo `DropInfo` represents the real-time state of a drag operation relative to a specific drop target view. It is only valid within `onDrop` callbacks. 
### Properties #### location: Point - The current drag location - Expressed in the **local coordinate space of the drop view** - Commonly used for: - Insertion indicators - Reordering logic - Position-based highlighting ### Methods #### hasItemsConforming(types: UTType\[]): boolean - Indicates whether at least one dragged item conforms to any of the specified UTTypes - Commonly used in: - `validateDrop` - `dropEntered` - `dropUpdated` - This method performs capability checks only and does not load data #### itemProviders(types: UTType\[]): ItemProvider\[] - Returns all `ItemProvider` instances conforming to the specified UTTypes - **Only valid inside the `performDrop` callback** - After `performDrop` returns, access to the dragged data is revoked by the system > Critical constraint > You must **start loading the contents** of the returned `ItemProvider` instances **within the scope of `performDrop`**. > Loading may complete later, but it must be initiated synchronously before `performDrop` returns. *** ## DropOperation `DropOperation` describes the action a drop target intends to perform. Available values: - `"copy"` Copies the dragged data (default and most common) - `"move"` Moves the data instead of copying it (typically internal to the app) - `"cancel"` Cancels the drag operation and transfers no data - `"forbidden"` Explicitly disallows the drop at the current location `DropOperation` is usually returned from `dropUpdated` to dynamically control the drag behavior. *** ## DragDropProps `DragDropProps` defines the optional drag-and-drop capabilities that a view may adopt. *** ## onDrag ### Purpose Marks the view as a **drag source**, allowing the user to initiate a drag operation from it. ### Definition ```ts onDrag?: { data: () => ItemProvider preview: VirtualNode } ``` ### Parameters #### data ```ts data: () => ItemProvider ``` - Returns an `ItemProvider` describing the dragged data - Supports text, images, files, URLs, and custom types - Invoked each time a drag begins Recommended practice: Create a new `ItemProvider` instance for each drag operation. Do not reuse instances. #### preview ```ts preview: VirtualNode ``` - A view used as the drag preview - Rendered by the system as a floating representation during dragging - Centered over the source view by default *** ## onDrop ### Purpose Marks the view as a **drop destination** and provides fine-grained control over validation, interaction updates, and data handling. 
### Definition ```ts onDrop?: { types: UTType[] validateDrop?: (info: DropInfo) => boolean dropEntered?: (info: DropInfo) => void dropUpdated?: (info: DropInfo) => DropOperation | null dropExited?: (info: DropInfo) => void performDrop: (info: DropInfo) => boolean } ``` *** ### onDrop.types ```ts types: UTType[] ``` - Declares the content types this view can accept - If the dragged content does not conform to any listed type: - The drop target does not activate - `validateDrop` is not called - Visual feedback is not shown *** ### validateDrop ```ts validateDrop?: (info: DropInfo) => boolean ``` - Determines whether the drop operation should be allowed to begin - Returning `false` immediately rejects the drag - Common use cases: - Checking item count - Enforcing application state constraints Default behavior: always returns `true` *** ### dropEntered ```ts dropEntered?: (info: DropInfo) => void ``` - Called when the drag enters the drop target area - Typically used to: - Show highlight states - Display insertion placeholders - Trigger animations *** ### dropUpdated ```ts dropUpdated?: (info: DropInfo) => DropOperation | null ``` - Called repeatedly as the drag moves within the drop target - Used to dynamically specify the intended `DropOperation` Return value behavior: - Returning a `DropOperation` updates the active operation - Returning `null`: - Reuses the last valid operation - Falls back to `"copy"` if none was previously returned *** ### dropExited ```ts dropExited?: (info: DropInfo) => void ``` - Called when the drag leaves the drop target area - Commonly used to clear highlight or placeholder UI *** ### performDrop ```ts performDrop: (info: DropInfo) => boolean ``` - **The most critical callback** - Indicates that the user has released the drag and data access is permitted - Return value: - `true` if the drop was successfully handled - `false` if the drop failed #### Mandatory constraints - Within this method, you must: - Call `info.itemProviders(...)` - Immediately initiate data loading from the returned providers - You must not: - Store `ItemProvider` references for later use - Defer loading to unrelated callbacks These constraints are enforced by the operating system for security reasons. *** ## Typical Interaction Flow 1. The user initiates a drag from an `onDrag` view 2. The system checks compatibility using `onDrop.types` 3. `validateDrop` is invoked 4. The drag enters the drop target → `dropEntered` 5. The drag moves within the target → repeated `dropUpdated` 6. The drag leaves the target → `dropExited` 7. The user releases the drag → `performDrop` 8. Data is loaded and processed *** ## Design Guidelines and Best Practices - Declare UTTypes as narrowly as possible - Use `"forbidden"` in `dropUpdated` to explicitly block invalid drops - Perform heavy parsing or processing only after `ItemProvider` loading completes - Prefer system-standard UTTypes (text, image, file, URL) for cross-app drag-and-drop --- url: /TestFlight/guide/Changelog/2.4.6/onDropContent.md --- # onDropContent `onDropContent` is a view modifier provided by Scripting that allows a view to act as a **drop target**, receiving files, images, or text dragged in from other applications. 
*** ## Overview With `onDropContent`, you can: - Receive drag-and-drop content from other apps - Restrict acceptable content using UTType identifiers - Track whether a drag operation is hovering over the view - Start loading dropped content through `ItemProvider` - Establish persistent access to security-scoped files when needed *** ## Modifier Definition ```ts onDropContent?: { types: UTType[] isTarget: { value: boolean onChanged: (value: boolean) => void } | Observable perform: (attachments: ItemProvider[]) => boolean } ``` *** ## Parameters ### types Specifies the list of content types that the view can accept, expressed as UTType strings. If the drag operation does not contain any of the specified types: - The view does not activate as a drop target - `isTarget` does not update - `perform` is not called Example: ```ts types: ["public.image", "public.movie"] ``` *** ### isTarget Indicates whether the drag operation is currently hovering over the view. - The value is `true` when the drag enters the view’s area - The value is `false` when the drag exits the area Two forms are supported: - Binding object form ```ts { value: boolean onChanged: (value: boolean) => void } ``` - Observable form ```ts Observable ``` The observable form works well with `useObservable` and provides a more concise reactive binding. *** ### perform Called when content matching the specified `types` is dropped onto the view. ```ts perform: (attachments: ItemProvider[]) => boolean ``` - `attachments` is an array of `ItemProvider` - Each `ItemProvider` represents one dropped item - The return value indicates whether the drop was successfully handled Return value semantics: - Return `true` to indicate the drop was accepted - Return `false` to indicate the drop was not handled *** ## Execution Rules for perform The following rules must be followed inside `perform`: - Loading of `ItemProvider` contents must be **started synchronously within the execution scope of `perform`** - Asynchronous completion is allowed using `Promise` or `then` - Loading must not be initiated later from a different callback or event - If `perform` returns `false`, the system treats the drop as unhandled Reasoning: - Dropped content is protected by system security rules - Access to the dropped payload is only valid while `perform` is executing - If loading does not begin within this scope, the content may no longer be accessible *** ## Working with ItemProvider Within `perform`, you should inspect each `ItemProvider` and start loading based on its capabilities. Typical steps include: - Checking type conformance using `hasItemConforming` - Selecting an appropriate loading method - Handling files, images, or text accordingly *** ## Example Usage ```tsx const isTarget = useObservable(false) return { const images: UIImage[] = [] const videos: string[] = [] let found = false for (const attachment of attachments) { if (attachment.hasItemConforming("public.png")) { found = true attachment.loadUIImage().then(image => { if (image != null) { images.push(image) } }) } else if (attachment.hasItemConforming("public.movie")) { found = true attachment.loadFilePath("public.movie").then(filePath => { if (filePath != null) { // Create a bookmark for the security-scoped file FileManager.addFileBookmark(filePath) videos.push(filePath) } }) } } return found } }} > ... ``` *** ## Security-Scoped File Access File paths obtained via `onDropContent` are typically **security-scoped resources**. 
These paths may become invalid when: - `perform` returns - The app restarts - The script lifecycle ends To retain long-term access, you should create a file bookmark as soon as the path is obtained. *** ## FileManager.addFileBookmark ```ts FileManager.addFileBookmark(path: string, name?: string): string | null ``` Description: - Creates a security-scoped bookmark for a file or folder - Intended for paths obtained via APIs such as `Photos` or `onDropContent` - Returns the bookmark name, or `null` if creation fails Example: ```ts const bookmarkName = FileManager.addFileBookmark(filePath) ``` *** ## FileManager.removeFileBookmark ```ts FileManager.removeFileBookmark(name: string): boolean ``` Description: - Removes a previously created file bookmark - Should be called when access to the file is no longer needed - Returns whether the removal was successful Example: ```ts FileManager.removeFileBookmark(bookmarkName) ``` *** ## Usage Recommendations - Specify `types` as precisely as possible - Use `perform` only to start loading, not to wait for results - Load images and lightweight data as objects when appropriate - Prefer file paths for large resources such as videos or documents - Create bookmarks for files that require long-term access - Remove bookmarks when the associated files are no longer needed --- url: /TestFlight/guide/Changelog/2.4.7/AVPlayer.md --- # AVPlayer `AVPlayer` provides audio and video playback capabilities, including playback control, rate control, looping, playback state observation, and media metadata loading. You set a media source using `setSource()` (local file or remote URL), then start playback using `play()`. *** ## Getting Started ```ts const player = new AVPlayer() if (player.setSource("https://example.com/audio.mp3")) { player.onReadyToPlay = () => { player.play() } player.onEnded = () => { console.log("Playback finished") } } else { console.error("Failed to set media source") } ``` *** ## API Reference ### Properties #### `volume: number` Controls the playback volume. Range: `0.0` (muted) to `1.0` (maximum). ```ts player.volume = 0.5 ``` *** #### `duration: DurationInSeconds` The total duration of the media in seconds. This value is `0` until the media has finished loading. ```ts console.log(player.duration) ``` *** #### `currentTime: DurationInSeconds` The current playback position in seconds. You can assign a value to seek to a specific time. ```ts player.currentTime = 30 ``` *** #### `rate: number` The **current playback rate**. - `1.0` = normal speed - `< 1.0` = slower playback - `> 1.0` = faster playback This property reflects the actual playback speed while playing. ```ts player.rate = 1.25 ``` *** #### `defaultRate: number` The **default playback rate used when playback starts**. - Used when calling `play()` **without** specifying `atRate` - Changing `defaultRate` does **not** immediately affect an ongoing playback - Primarily intended to control the rate for the _next_ playback start ```ts player.defaultRate = 1.5 ``` Typical use cases: - The user selects a preferred playback speed before pressing play - Playback automatically starts at that speed next time `play()` is called *** #### `timeControlStatus: TimeControlStatus` Indicates the current playback state: - `paused` Playback is paused or has not started - `waitingToPlayAtSpecifiedRate` Waiting for playback conditions (e.g. 
buffering, network) - `playing` Playback is in progress *** #### `numberOfLoops: number` Controls how many times the media will loop: - `0`: no looping - positive value: loop a specific number of times - negative value: loop indefinitely ```ts player.numberOfLoops = -1 ``` *** ### Methods #### `setSource(filePathOrURL: string): boolean` Sets the media source for playback. Supports: - Local file paths - Remote URLs Returns: - `true` if the source was set successfully - `false` if it failed *** #### `play(atRate?: number): boolean` Starts playback of the current media. Playback rate resolution order: 1. If `atRate` is provided, playback starts at that rate 2. Otherwise, `defaultRate` is used During playback, you can still modify `rate` dynamically. ```ts player.play() // Uses defaultRate player.play(1.25) // Starts playback at 1.25× speed ``` Returns: - `true` if playback started successfully - `false` otherwise *** #### `pause()` Pauses playback. *** #### `stop()` Stops playback and resets the position to the beginning. *** #### `dispose()` Releases all player resources and removes internal observers. Must be called when the player is no longer needed to avoid resource leaks. *** #### `loadMetadata(): Promise` Loads the full metadata of the current media. Returns: - An array of `AVMetadataItem` - `null` if no source is set or metadata is unavailable ```ts const metadata = await player.loadMetadata() ``` *** #### `loadCommonMetadata(): Promise` Loads the _common metadata_ of the current media. Common metadata provides format-independent `commonKey` values, typically used for title, artist, album, etc. ```ts const common = await player.loadCommonMetadata() ``` *** ### Callbacks #### `onReadyToPlay?: () => void` Called when the media is ready for playback. *** #### `onTimeControlStatusChanged?: (status: TimeControlStatus) => void` Called whenever the playback state changes, such as: - waiting → playing - playing → paused *** #### `onEnded?: () => void` Called when playback finishes. *** #### `onError?: (message: string) => void` Called when a playback error occurs. Receives a descriptive error message. *** ## Audio Session Notes `AVPlayer` relies on the system’s shared audio session. You should configure and activate it before playback. ```ts await SharedAudioSession.setCategory('playback', ['mixWithOthers']) await SharedAudioSession.setActive(true) ``` Handling interruptions (e.g. phone calls): ```ts SharedAudioSession.addInterruptionListener(type => { if (type === 'began') { player.pause() } else if (type === 'ended') { player.play() } }) ``` *** ## Common Usage Examples ### Play Using the Default Rate ```ts player.defaultRate = 1.5 player.play() ``` *** ### Start Playback at a Specific Rate ```ts player.play(2.0) ``` *** ### Loop Playback ```ts player.numberOfLoops = 3 player.play() ``` *** ### Read Common Metadata ```ts const metadata = await player.loadCommonMetadata() if (metadata) { const title = metadata.find(i => i.commonKey === 'title') console.log(await title?.stringValue) } ``` *** ## Best Practices 1. **Differentiate `defaultRate` vs `rate`** - `defaultRate` affects how playback _starts_ - `rate` reflects or controls the _current_ playback speed 2. **Always Release Resources** - Call `dispose()` when playback ends or the player is no longer needed 3. **Observe Playback State** - Use `onTimeControlStatusChanged` to update loading or playing UI states 4. **Configure Audio Session Before Playing** - Prevent unexpected background, mute, or mixing behavior 5. 
**Metadata Timing** - Reading metadata after `onReadyToPlay` is more reliable *** ## Full Example ```ts const player = new AVPlayer() await SharedAudioSession.setCategory('playback', ['mixWithOthers']) await SharedAudioSession.setActive(true) player.defaultRate = 1.25 if (player.setSource("https://example.com/audio.mp3")) { player.onReadyToPlay = () => { player.play() } player.onEnded = () => { console.log("Playback finished") player.dispose() } player.onError = message => { console.error("Playback error:", message) player.dispose() } const metadata = await player.loadCommonMetadata() if (metadata) { const title = metadata.find(i => i.commonKey === 'title') console.log("Title:", await title?.stringValue) } } ``` --- url: /TestFlight/guide/Changelog/2.4.7/AssistantTool User-Initiated Cancellation.md --- # AssistantTool User-Initiated Cancellation To improve the user experience of long-running tools, AssistantTool introduces support for **user-initiated cancellation**. When a user cancels a tool while it is executing, developers may optionally provide an `onCancel` callback to return partially completed results. If `onCancel` is not implemented, cancellation is handled automatically by the system and no additional logic is required. This mechanism is particularly suitable for search, analysis, crawling, batch processing, and other multi-step or time-consuming tools. *** ## Capability Overview The cancellation mechanism introduces the following APIs: ```ts type OnCancel = () => string | null | undefined var onCancel: OnCancel | null | undefined const isCancelled: boolean ``` *** ## Core Semantics ### onCancel Is Optional - Implementing `onCancel` is optional - Not implementing `onCancel` is a fully valid and supported usage When `onCancel` is not set: - If the user clicks Cancel, the tool is marked as cancelled - Any results returned by the execution function after cancellation are automatically ignored - The Assistant does not consume or process those results The outcome is that developers do not need to write any additional logic to handle cancellation unless they explicitly want to return partial results. *** ### Purpose of onCancel The sole purpose of implementing `onCancel` is to proactively return **partially completed results** when a user cancels execution. It is an experience optimization, not a required responsibility. *** ## Semantics of isCancelled - `AssistantTool.isCancelled` becomes `true` immediately after the user cancels - It can be read at any point during execution - It is intended to control whether subsequent steps should continue running Typical uses include breaking loops, skipping expensive operations, and releasing resources early. *** ## When and Where to Register onCancel `onCancel` must be registered **inside the tool execution function**. Reasons: - Each tool execution has its own lifecycle - After execution completes, the system automatically resets `onCancel` to `null` - Registering `onCancel` outside the execution function has no effect Valid locations include: - `registerExecuteTool` - `registerExecuteToolWithApproval` *** ## Minimal Example Without onCancel ```ts const executeTool = async () => { await doSomethingSlow() return { success: true, message: "Operation completed." 
} } ``` Behavior: - If the user cancels, the tool is marked as cancelled - The returned result is automatically ignored - No additional cancellation handling is required *** ## Example With onCancel and Partial Results ```ts const executeTool = async () => { const partialResults: string[] = [] AssistantTool.onCancel = () => { return [ "Operation was cancelled by the user.", "Partial results:", ...partialResults ].join("\n") } for (const item of items) { if (AssistantTool.isCancelled) break const result = await process(item) partialResults.push(result) } return { success: true, message: partialResults.join("\n") } } ``` Behavior: - When the user cancels, `onCancel` is invoked immediately - Partially completed results are returned to the Assistant - Subsequent execution stops based on `isCancelled` *** ## Typical Use Cases Good candidates for implementing `onCancel` include: - Multi-source search and aggregation - Crawling and parsing multiple documents - Project-wide scanning and analysis - Batch computation or generation tasks - Long-running reasoning or processing pipelines Cases where `onCancel` is usually unnecessary include: - Tools that complete almost instantly - Operations with no meaningful intermediate results - Tasks where cancellation produces no useful partial output *** ## Return Value Guidelines for onCancel Return values from `onCancel` follow these rules: - Returning a `string` The string is used as the cancellation message sent to the Assistant - Returning `null` or `undefined` Indicates that no message should be returned (valid but generally not recommended) Recommended practices: - Clearly state that the operation was cancelled by the user - Explicitly label partial output as partial results when applicable *** ## Relationship to the Approval Flow - `onCancel` applies only during the execution phase - It does not affect the Approval Request phase - It works for both manually approved tools and auto-approved tools Important distinction: - `secondaryConfirmed` Indicates the user declined execution during the approval phase - `onCancel` Indicates the user cancelled during execution after approval *** ## Common Mistakes and Caveats ### Registering onCancel Outside the Execution Function This is ineffective because the execution context has already ended. *** ### Performing Expensive or Side-Effect Operations in onCancel `onCancel` should return quickly and must not initiate network requests, file writes, or other side effects. *** ### Ignoring isCancelled During Execution Even with `onCancel` implemented, long-running logic should explicitly check `isCancelled` to avoid unnecessary resource usage. 
*** ## Recommended Execution Structure ```ts const executeTool = async () => { const partial = [] AssistantTool.onCancel = () => { return formatPartialResult(partial) } for (const item of items) { if (AssistantTool.isCancelled) break const r = await process(item) partial.push(r) } if (AssistantTool.isCancelled) { return } return { success: true, message: formatFinalResult(partial) } } ``` *** ## Design Philosophy - User cancellation is a normal interaction, not an error - `onCancel` is an optional enhancement, not a requirement - Not implementing `onCancel` does not break tool behavior - The system automatically ignores results after cancellation - Implement `onCancel` only to provide a more graceful completion experience *** ## Summary - AssistantTool supports user-initiated cancellation during execution - `onCancel` allows returning partially completed results - `isCancelled` enables early termination of execution logic - Cancellation is safely handled even without custom logic - Developers are not required to manage cancellation explicitly --- url: /TestFlight/guide/Changelog/2.4.7/Device.md --- # Device The `Device` namespace provides access to information about the current device and its environment, including hardware characteristics, system details, screen metrics, battery status, orientation, proximity sensor state, locale and language settings, wake lock control, and network interfaces. This API is commonly used to adapt UI layouts, behavior, and feature availability based on the device’s runtime context. *** ## Orientation Represents the physical orientation of the device. ```ts type Orientation = | "portrait" | "portraitUpsideDown" | "landscapeLeft" | "landscapeRight" | "faceUp" | "faceDown" | "unknown" ``` ### Description - `portrait`: Portrait orientation, default upright position - `portraitUpsideDown`: Portrait orientation, upside down - `landscapeLeft`: Landscape orientation rotated to the left - `landscapeRight`: Landscape orientation rotated to the right - `faceUp`: Device is lying flat with the screen facing upward - `faceDown`: Device is lying flat with the screen facing downward - `unknown`: Orientation cannot be determined *** ## InterfaceOrientation Represents the supported interface orientations for the app. ```ts type InterfaceOrientation = | "portrait" | "portraitUpsideDown" | "landscape" | "landscapeLeft" | "landscapeRight" | "all" | "allButUpsideDown" ``` ### Description - `portrait`: Portrait orientation, default upright position - `portraitUpsideDown`: Portrait orientation, upside down - `landscape`: Landscape orientation, either left or right - `landscapeLeft`: Landscape orientation rotated to the left - `landscapeRight`: Landscape orientation rotated to the right - `all`: All supported orientations - `allButUpsideDown`: All supported orientations except upside down *** ## NetworkInterface Describes a single network interface address. ```ts type NetworkInterface = { address: string netmask: string | null family: "IPv4" | "IPv6" mac: string | null isInternal: boolean cidr: string | null } ``` ### Properties - `address`: IP address - `netmask`: Subnet mask - `family`: Address family, either IPv4 or IPv6 - `mac`: MAC address (may be null depending on system restrictions) - `isInternal`: Indicates whether the interface is internal (for example, loopback) - `cidr`: CIDR notation, such as `192.168.1.10/24` *** ## BatteryState Represents the current battery state. 
```ts type BatteryState = "full" | "charging" | "unplugged" | "unknown" ``` ### Description - `full`: Battery is fully charged - `charging`: Device is currently charging - `unplugged`: Device is not connected to power - `unknown`: Battery state cannot be determined *** ## Device Information ### model ```ts const model: string ``` The device model, such as `"iPhone"` or `"iPad"`. *** ### localizedModel ```ts const localizedModel: string ``` The localized name of the device model. *** ### systemVersion ```ts const systemVersion: string ``` The current operating system version, for example `"18.2"`. *** ### systemName ```ts const systemName: string ``` The name of the operating system, such as `"iOS"`, `"iPadOS"`, or `"macOS"`. *** ### isiPad / isiPhone ```ts const isiPad: boolean const isiPhone: boolean ``` Indicates whether the current device is an iPad or an iPhone. *** ### screen ```ts const screen: { width: number height: number scale: number } ``` Screen metrics: - `width`: Screen width in logical pixels - `height`: Screen height in logical pixels - `scale`: Screen scale factor (for example, 2 or 3) *** ## Battery and Sensors ### batteryState ```ts const batteryState: BatteryState ``` The current battery state. *** ### batteryLevel ```ts const batteryLevel: number ``` The current battery level, expressed as a value between `0.0` and `1.0`. *** ### proximityState ```ts const proximityState: boolean ``` The state of the proximity sensor. `true` indicates that the device is close to the user, such as during a phone call. *** ## Orientation and Layout ### isLandscape / isPortrait / isFlat ```ts const isLandscape: boolean const isPortrait: boolean const isFlat: boolean ``` - `isLandscape`: Indicates whether the device is in a landscape orientation - `isPortrait`: Indicates whether the device is in a portrait orientation - `isFlat`: Indicates whether the device is lying flat (face up or face down) *** ### orientation ```ts const orientation: Orientation ``` The current physical orientation of the device. *** ### supportedInterfaceOrientations ```ts var supportedInterfaceOrientations: InterfaceOrientation[] ``` The list of supported interface orientations. You can set this property to limit the orientations that your page supports. #### Example ```tsx function Page() { useEffect(() => { Device.supportedInterfaceOrientations = ["all"] return () => { Device.supportedInterfaceOrientations = ["portrait"] } }, []) return ... } ``` *** ## Appearance and Environment ### colorScheme ```ts const colorScheme: ColorScheme ``` The current system color scheme, such as light or dark mode. *** ### isiOSAppOnMac ```ts const isiOSAppOnMac: boolean ``` Indicates whether the current process is an iPhone or iPad app running on macOS. *** ## Locale and Language ### systemLocale ```ts const systemLocale: string ``` The current system locale, for example `"en_US"`. *** ### preferredLanguages ```ts const preferredLanguages: string[] ``` The user’s preferred languages, for example: ```ts ["en-US", "zh-Hans-CN"] ``` *** ### systemLocales (Deprecated) ```ts const systemLocales: string[] ``` Deprecated. Use `preferredLanguages` instead. *** ### systemLanguageTag ```ts const systemLanguageTag: string ``` The current language tag, such as `"en-US"`. *** ### systemLanguageCode ```ts const systemLanguageCode: string ``` The current language code, such as `"en"`. *** ### systemCountryCode ```ts const systemCountryCode: string | undefined ``` The current country code, such as `"US"`. 
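A brief sketch that reads the locale and language values above together; the output shown in the comments is illustrative:

```ts
// Inspect the user's locale and language settings
console.log(Device.systemLocale)       // e.g. "en_US"
console.log(Device.systemLanguageTag)  // e.g. "en-US"
console.log(Device.systemCountryCode)  // e.g. "US"

// Prefer the first language the user has configured, falling back to English
const preferred = Device.preferredLanguages[0] ?? "en-US"
console.log(preferred)
```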
*** ### systemScriptCode ```ts const systemScriptCode: string | undefined ``` The script code of the current locale, such as `"Hans"` for Simplified Chinese. *** ## Wake Lock ### isWakeLockEnabled ```ts const isWakeLockEnabled: Promise ``` Retrieves whether the wake lock is currently enabled, preventing the device from automatically sleeping. *** ### setWakeLockEnabled ```ts function setWakeLockEnabled(enabled: boolean): void ``` Enables or disables the wake lock. Notes: - Available only in the **Scripting app** - When enabled, the device will remain awake and not auto-lock *** ## Battery Listeners ### addBatteryStateListener ```ts function addBatteryStateListener( callback: (state: BatteryState) => void ): void ``` Registers a listener for battery state changes. *** ### removeBatteryStateListener ```ts function removeBatteryStateListener( callback?: (state: BatteryState) => void ): void ``` Removes a battery state listener. If `callback` is not provided, all battery state listeners are removed. *** ### addBatteryLevelListener ```ts function addBatteryLevelListener( callback: (level: number) => void ): void ``` Registers a listener for battery level changes. *** ### removeBatteryLevelListener ```ts function removeBatteryLevelListener( callback?: (level: number) => void ): void ``` Removes a battery level listener. If `callback` is not provided, all battery level listeners are removed. *** ## Orientation Listeners ### addOrientationListener ```ts function addOrientationListener( callback: (orientation: Orientation) => void ): void ``` Starts observing device orientation changes. Notes: - This method must be called before orientation updates are delivered - Orientation updates do not work when system orientation lock is enabled *** ### removeOrientationListener ```ts function removeOrientationListener( callback?: (orientation: Orientation) => void ): void ``` Removes an orientation change listener. If `callback` is not provided, all orientation listeners are removed and observation is stopped. *** ## Proximity Listeners ### addProximityStateListener ```ts function addProximityStateListener( callback: (state: boolean) => void ): void ``` Registers a listener for proximity sensor state changes. *** ### removeProximityStateListener ```ts function removeProximityStateListener( callback?: (state: boolean) => void ): void ``` Removes a proximity state listener. If `callback` is not provided, all proximity listeners are removed. *** ## Network ### networkInterfaces ```ts function networkInterfaces(): Record ``` Returns the network interfaces available on the device. Return value: - Keys are interface names (such as `en0`, `lo0`) - Values are arrays of `NetworkInterface` objects associated with each interface This method is useful for network diagnostics, retrieving local IP addresses, and debugging connectivity issues. --- url: /TestFlight/guide/Changelog/2.4.7/Location.md --- # Location The Location API provides access to the device’s geographic location, geocoding services, system map location picking, and heading (compass) information. It is available as a global API in Scripting and can be used directly without importing any modules. The API respects system permissions, user-selected accuracy levels, and platform limitations, and is suitable for scripts, interactive views, and supported widget scenarios. ## LocationAccuracy Defines the desired accuracy level for location data. 
**Type Definition** ```ts type LocationAccuracy = | "best" | "tenMeters" | "hundredMeters" | "kilometer" | "threeKilometers" | "bestForNavigation" | "reduced" ``` **Description** - `best` Requests the highest accuracy available on the device. - `tenMeters` Requests approximately 10-meter accuracy. - `hundredMeters` Requests approximately 100-meter accuracy. - `kilometer` Requests approximately 1-kilometer accuracy. - `threeKilometers` Requests coarse accuracy within approximately 3 kilometers. - `bestForNavigation` Optimized for navigation use cases, with higher update frequency and power consumption. - `reduced` Requests reduced-accuracy location data, typically used when the user has granted approximate location access. ## LocationInfo Represents a geographic coordinate with an associated timestamp. **Type Definition** ```ts type LocationInfo = { latitude: number longitude: number timestamp: number } ``` **Properties** - `latitude` Latitude in degrees. - `longitude` Longitude in degrees. - `timestamp` Time when the location was recorded, in milliseconds since the Unix epoch. ## LocationPlacemark Provides a human-readable description of a geographic location, usually returned by geocoding operations. **Type Definition** ```ts type LocationPlacemark = { location?: LocationInfo region?: string timeZone?: string name?: string thoroughfare?: string subThoroughfare?: string locality?: string subLocality?: string administrativeArea?: string subAdministrativeArea?: string postalCode?: string isoCountryCode?: string country?: string inlandWater?: string ocean?: string areasOfInterest?: string[] } ``` **Description** A placemark may include address components, administrative regions, country information, and points of interest. Field availability depends on the location and system map data. ## Heading Represents compass and orientation information derived from the device’s sensors. **Type Definition** ```ts type Heading = { headingAccuracy: number trueHeading: number magneticHeading: number timestamp: Date x: number y: number z: number } ``` **Properties** - `headingAccuracy` Maximum deviation, in degrees, between the reported heading and the true geomagnetic heading. - `trueHeading` Heading relative to true north, in degrees. - `magneticHeading` Heading relative to magnetic north, in degrees. - `timestamp` Time at which the heading was measured. - `x`, `y`, `z` Raw geomagnetic field values for the three axes, measured in microteslas. ## Authorization and Configuration ### isAuthorizedForWidgetUpdates ```ts const isAuthorizedForWidgetUpdates: boolean ``` Indicates whether the current widget is eligible to receive location updates. ### accuracy ```ts const accuracy: LocationAccuracy ``` The currently configured desired location accuracy. ### setAccuracy ```ts function setAccuracy(accuracy: LocationAccuracy): Promise ``` Sets the desired accuracy level for subsequent location requests. Higher accuracy may increase power consumption and require additional permissions. **Example** ```ts await Location.setAccuracy("hundredMeters") ``` ## Requesting Location ### requestCurrent ```ts function requestCurrent( options?: { forceRequest?: boolean } ): Promise ``` Requests the device’s current location. By default, a cached location is returned immediately if available. If no cached location exists, a new location request is performed. When `forceRequest` is set to `true`, any cached location is ignored and a fresh request is always made. 
**Example** ```ts const location = await Location.requestCurrent() if (location) { console.log(location.latitude, location.longitude) } ``` Forcing a fresh location request: ```ts const location = await Location.requestCurrent({ forceRequest: true }) ``` ### pickFromMap ```ts function pickFromMap(): Promise ``` Presents the system map interface and allows the user to manually select a location. **Example** ```ts const picked = await Location.pickFromMap() if (picked) { console.log("Picked location:", picked.latitude, picked.longitude) } ``` ## Geocoding ### reverseGeocode ```ts function reverseGeocode(options: { latitude: number longitude: number locale?: string }): Promise ``` Converts a geographic coordinate into human-readable address information. **Example** ```ts const placemarks = await Location.reverseGeocode({ latitude: 39.9042, longitude: 116.4074, locale: "en-US" }) console.log(placemarks?.[0]?.locality) ``` ### geocodeAddress ```ts function geocodeAddress(options: { address: string locale?: string }): Promise ``` Converts a textual address into geographic placemark results. **Example** ```ts const results = await Location.geocodeAddress({ address: "Times Square", locale: "en-US" }) const location = results?.[0]?.location ``` ## Heading and Compass ### requestHeading ```ts function requestHeading(): Promise ``` Returns the most recently reported heading. If heading updates have never been started, the result is `null`. **Example** ```ts const heading = await Location.requestHeading() if (heading) { console.log(heading.trueHeading) } ``` ### startUpdatingHeading ```ts function startUpdatingHeading(): Promise ``` Starts continuous heading updates. ### stopUpdatingHeading ```ts function stopUpdatingHeading(): void ``` Stops heading updates and releases related system resources. ### addHeadingListener ```ts function addHeadingListener( listener: (heading: Heading) => void ): void ``` Registers a listener that is called whenever the heading changes. **Example** ```ts await Location.startUpdatingHeading() Location.addHeadingListener(heading => { console.log("Heading:", heading.trueHeading) }) ``` ### removeHeadingListener ```ts function removeHeadingListener( listener?: (heading: Heading) => void ): void ``` Removes a previously registered heading listener. If no listener is provided, all heading listeners are removed. --- url: /TestFlight/guide/Changelog/2.4.7/SQLite/Database and Connection.md --- # Database and Connection This section describes how to open SQLite databases, configure connections, and understand connection-related behaviors in Scripting. SQLite is exposed as a **global namespace** in Scripting and can be used directly without importing. *** ## Opening a Database ### Opening a Disk Database ```ts const dbPath = Path.join( FileManager.appGroupDocumentsDirectory, "app.db" ) const db = SQLite.open(dbPath) ``` `SQLite.open` opens or creates a SQLite database located in the script’s data directory. - If the database file does not exist, it will be created automatically - Opening the same path multiple times will internally reuse database resources - The returned `Database` instance is used for all subsequent operations *** ### Opening an In-Memory Database ```ts const db = SQLite.openInMemory("temp") ``` `SQLite.openInMemory` creates a database that exists only in memory. 
- In-memory databases are not persisted to disk - All data is lost when the script ends or the database is released - Suitable for temporary computations, testing, or intermediate results The `name` parameter is used to distinguish different in-memory database instances. *** ## Database Configuration An optional configuration object can be provided when opening a database: ```ts const db = SQLite.open(dbPath, { foreignKeysEnabled: true, readonly: false, journalMode: "wal", busyMode: 5, maximumReaderCount: 5, label: "main-db" }) ``` *** ### foreignKeysEnabled ```ts foreignKeysEnabled: boolean ``` Controls whether foreign key constraints are enabled. - `true`: Enables SQLite foreign key constraints (equivalent to `PRAGMA foreign_keys = ON`) - `false`: Disables foreign key constraints It is recommended to explicitly enable this option when using foreign keys. *** ### readonly ```ts readonly: boolean ``` Controls whether the database is opened in read-only mode. - `true`: Only read operations are allowed; all write operations will fail - `false`: Both read and write operations are allowed Read-only mode is useful for: - Data inspection tools - Analysis scripts - Preventing accidental data modification *** ### journalMode ```ts journalMode: "wal" | "default" ``` Specifies the SQLite journal mode. - `"wal"`: Enables Write-Ahead Logging, suitable for concurrent read/write workloads - `"default"`: Uses SQLite’s default journal mode In most scenarios, `"wal"` is recommended. *** ### busyMode ```ts busyMode: "immediateError" | number ``` Controls the behavior when the database is locked. - `"immediateError"`: Immediately throws an error if the database is busy - `number`: Maximum time to wait for the lock to be released, in seconds Example: ```ts busyMode: 3 ``` This configuration allows the database to wait up to 3 seconds before failing. *** ### maximumReaderCount ```ts maximumReaderCount: number ``` Limits the maximum number of concurrent reader connections. This option is used to control concurrency and resource usage: - Lower values reduce resource consumption - Higher values allow greater read concurrency *** ### label ```ts label: string | null ``` Assigns a human-readable label to the database connection. This label is primarily used for: - Debugging - Logging - Internal diagnostics It does not affect database behavior. *** ## Database Instance Both `SQLite.open` and `SQLite.openInMemory` return a `Database` instance: ```ts const db: Database ``` The `Database` instance: - Represents a logical database connection - Serves as the entry point for SQL execution, transactions, and schema operations - Does not expose underlying connections, threads, or queues Connection creation, lifecycle management, and thread scheduling are handled internally. 
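A minimal sketch of working through the returned `Database` instance; the table name and query shape here are illustrative:

```ts
const db = SQLite.open(
  Path.join(FileManager.appGroupDocumentsDirectory, "notes.db")
)

// Schema changes, writes, and reads all go through the same instance
await db.execute(
  "CREATE TABLE IF NOT EXISTS note (id INTEGER PRIMARY KEY, title TEXT)"
)
await db.execute("INSERT INTO note (title) VALUES (?)", ["First note"])

const notes = await db.fetchAll<{ id: number; title: string }>(
  "SELECT id, title FROM note"
)
console.log(notes)
```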
*** ## Concurrency and Threading Model Concurrency control and thread management are handled entirely by Scripting: - JavaScript code does not directly interact with threads - All database operations are executed through internal queues with controlled concurrency - Configuration options such as `busyMode` and `maximumReaderCount` influence internal scheduling behavior This design ensures that the SQLite API remains: - Predictable - Safe to use without manual locking - Free from cross-thread access issues at the script level *** ## Usage Recommendations - Use disk databases for persistent data storage - Use in-memory databases for temporary or test data - Explicitly enable `foreignKeysEnabled` when foreign key constraints are required - Use `"wal"` journal mode for concurrent workloads - Consider read-only mode for inspection or analysis scripts *** ## Next Steps After opening a database, typical next steps include: - Executing SQL statements and querying data - Performing batch writes using transactions - Creating and managing tables and indexes Continue with the following sections: - **Executing SQL & Queries** - **Transactions** - **Schema Management** --- url: /TestFlight/guide/Changelog/2.4.7/SQLite/Executing SQL and Query.md --- # Executing SQL and Query This section explains how to execute SQL statements, bind parameters, and query data using SQLite. All operations are performed through a `Database` instance. SQLite handles connection management, threading, and scheduling internally, allowing JavaScript code to focus solely on SQL and data. *** ## Executing SQL ### execute ```ts db.execute(sql: string, arguments?: Arguments): Promise ``` `execute` runs one or multiple SQL statements that do not return result sets. It is commonly used for: - Creating or modifying table schemas - Inserting, updating, or deleting data - Executing PRAGMA statements Example: ```ts await db.execute( "CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)" ) ``` ```ts await db.execute( "UPDATE user SET age = ? WHERE name = ?", [19, "Tom"] ) ``` *** ## Parameter Binding SQLite supports two parameter binding styles: **positional parameters** and **named parameters**. *** ### Positional Parameters ```ts await db.execute( "INSERT INTO user (name, age) VALUES (?, ?)", ["Tom", 18] ) ``` Values in the argument array are bound to `?` placeholders in order. *** ### Named Parameters ```ts await db.execute( "INSERT INTO user (name, age) VALUES (:name, :age)", { name: "Lucy", age: 20 } ) ``` Named parameters are passed as an object. The object keys must match the parameter names used in the SQL statement. *** ### DatabaseValue Types Bound values may be of the following types: - `string` - `number` - `boolean` - `Data` - `Date` - `null` `Date` and `Data` values are stored using SQLite-compatible representations. *** ## Querying Data SQLite provides three query methods, each intended for a different use case. *** ### fetchAll ```ts db.fetchAll(sql: string, arguments?: Arguments): Promise ``` Executes a query and returns **all result rows**. Example: ```ts const users = await db.fetchAll<{ name: string; age: number }>( "SELECT name, age FROM user" ) ``` If the query returns no rows, an empty array is returned. *** ### fetchOne ```ts db.fetchOne(sql: string, arguments?: Arguments): Promise ``` Executes a query and returns **the first result row**. 
Example: ```ts const user = await db.fetchOne<{ name: string }>( "SELECT name FROM user WHERE age = ?", [18] ) ``` Typical use cases include: - Queries expected to return exactly one row (for example, by primary key) - Aggregate queries such as `COUNT(*)` If the query returns no rows, this method throws an error. *** ### fetchSet ```ts db.fetchSet(sql: string, arguments?: Arguments): Promise ``` Executes a query and returns a **deduplicated result set**. This method is useful when: - Querying for unique values - Logical de-duplication is required at the result level Example: ```ts const names = await db.fetchSet<{ name: string }>( "SELECT name FROM user" ) ``` Duplicate rows are removed from the returned result. *** ## Type Mapping Query result values are automatically mapped to JavaScript types: - SQLite INTEGER → `number` - SQLite REAL → `number` - SQLite TEXT → `string` - SQLite BLOB → `Data` - SQLite NULL → `null` The shape of the returned objects is determined by the SQL query. SQLite does not enforce strict matching with the generic type `T`, but keeping them aligned is recommended for clarity and type safety. *** ## Error Handling The following situations may cause methods to throw errors: - SQL syntax errors - Parameter count or name mismatches - Constraint violations (for example, unique or foreign key constraints) - `fetchOne` returning no rows - Database lock timeouts caused by `busyMode` Use `try / catch` when error handling is required: ```ts try { await db.execute("INSERT INTO user (name) VALUES (?)", ["Tom"]) } catch (e) { console.error(e) } ``` *** ## Usage Recommendations - Use `execute` for SQL statements that do not return results - Use `fetchAll` when multiple rows are expected - Use `fetchOne` when exactly one row is required - Use `fetchSet` when deduplicated results are needed - Always prefer parameter binding over SQL string concatenation - Wrap complex write operations in transactions *** ## Next Steps When atomicity is required across multiple write operations, or when changes must be rolled back on failure, continue with: - **Transactions** --- url: /TestFlight/guide/Changelog/2.4.7/SQLite/Overview.md --- # Overview The SQLite module provides a structured, type-friendly, and predictable API for working with SQLite databases in Scripting. SQLite is exposed as a **global namespace** and does not require importing. It supports both disk-based and in-memory databases, and covers common use cases such as executing SQL statements, performing queries, managing transactions, defining schemas, and inspecting database structures. *** ## Getting Started ### Opening a Database ```ts const dbPath = Path.join(FileManager.appGroupDocumentsDirectory, "app.db") const db = SQLite.open(dbPath) ``` Opens a SQLite database located in the script’s data directory. If the file does not exist, it will be created automatically. To open an in-memory database: ```ts const db = SQLite.openInMemory("temp") ``` *** ### Executing SQL ```ts await db.execute( "CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)" ) await db.execute( "INSERT INTO user (name, age) VALUES (?, ?)", ["Tom", 18] ) ``` Both positional parameters and named parameters are supported. 
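The same insert can also be written with named parameters, passed as an object whose keys match the placeholders (see **Executing SQL & Queries** for details):

```ts
await db.execute(
  "INSERT INTO user (name, age) VALUES (:name, :age)",
  { name: "Lucy", age: 20 }
)
```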
*** ### Querying Data ```ts const users = await db.fetchAll<{ name: string; age: number }>( "SELECT name, age FROM user" ) const user = await db.fetchOne<{ name: string }>( "SELECT name FROM user WHERE age = ?", [18] ) ``` *** ### Using Transactions ```ts await db.transcation([ { sql: "INSERT INTO user (name, age) VALUES (?, ?)", args: ["Tom", 18] }, { sql: "INSERT INTO user (name, age) VALUES (?, ?)", args: ["Lucy", 20] } ]) ``` Transactions are declared as an ordered list of steps. All steps are executed sequentially, and any failure will cause the entire transaction to roll back. *** ## Core Capabilities The SQLite module provides the following core capabilities: - Database connection management - SQL execution with parameter binding - Structured data querying - Explicit transaction control - Table and index creation and removal - Database schema inspection *** ## Documentation Structure The SQLite documentation is organized by functional areas. Refer to the following sections based on your needs: - **Database & Connection** Opening databases, configuration options, read-only mode, and concurrency behavior - **Executing SQL & Queries** Executing SQL statements, parameter binding rules, and query APIs - **Transactions** Transaction model, transaction kinds, and design constraints - **Schema Management** Creating and managing tables and indexes using structured APIs - **Schema Introspection** Inspecting tables, columns, primary keys, foreign keys, and indexes - **Types Reference** Type definitions and enums used throughout the SQLite module *** ## Use Cases The SQLite module is suitable for the following scenarios: - Local data persistence - Script-level caching and state storage - Managing small to medium-sized structured datasets - Building SQLite-based utility scripts - Data migration, inspection, and debugging tools --- url: /TestFlight/guide/Changelog/2.4.7/SQLite/Schema Introspection.md --- # Schema Introspection This section explains how to inspect and analyze database structures using SQLite’s schema introspection APIs. Schema Introspection allows scripts to query database structure information at runtime, including tables, columns, primary keys, foreign keys, and indexes. These capabilities are commonly used for: - Data migrations and version management - Tooling scripts (such as database browsers or analyzers) - Runtime schema validation - Debugging and diagnostics *** ## Getting the Schema Version ### schemaVersion ```ts db.schemaVersion(): Promise ``` Returns the current database schema version. This value is typically used to: - Determine whether a database migration is required - Integrate with external schema versioning logic Example: ```ts const version = await db.schemaVersion() ``` *** ## Checking Table Existence ### tableExists ```ts db.tableExists(tableName: string, schemaName?: string): Promise ``` Checks whether a specified table exists. - `schemaName` defaults to the main schema - Returns `true` if the table exists Example: ```ts const exists = await db.tableExists("user") ``` *** ## Inspecting Columns ### columnsIn ```ts db.columnsIn(tableName: string, schemaName?: string): Promise ``` Returns structural information for all columns in the specified table. 
*** ### ColumnInfo ```ts type ColumnInfo = { name: string type: string defaultValueSQL: string | null isNotNull: boolean primaryKeyIndex: number } ``` Field descriptions: - `name`: column name - `type`: column type - `defaultValueSQL`: SQL expression for the default value - `isNotNull`: whether the column is NOT NULL - `primaryKeyIndex`: order of the column within the primary key (0 if not part of the primary key) Example: ```ts const columns = await db.columnsIn("user") ``` *** ## Inspecting Primary Keys ### primaryKey ```ts db.primaryKey(tableName: string, schemaName?: string): Promise ``` Returns primary key information for the specified table. *** ### PrimaryKeyInfo ```ts type PrimaryKeyInfo = { columns: string[] rowIDColumn: string | null isRowID: boolean } ``` Field descriptions: - `columns`: names of the primary key columns - `rowIDColumn`: name of the associated rowid column, if applicable - `isRowID`: whether SQLite’s implicit rowid is used as the primary key Example: ```ts const pk = await db.primaryKey("user") ``` *** ## Inspecting Foreign Keys ### foreignKeys ```ts db.foreignKeys(tableName: string, schemaName?: string): Promise ``` Returns all foreign key definitions for the specified table. *** ### ForeignKeyInfo ```ts type ForeignKeyInfo = { id: number originColumns: string[] destinationTable: string destinationColumns: string[] mapping: { origin: string destination: string }[] } ``` Field descriptions: - `id`: foreign key identifier - `originColumns`: columns in the current table - `destinationTable`: referenced table - `destinationColumns`: referenced columns - `mapping`: one-to-one column mapping Example: ```ts const fks = await db.foreignKeys("order") ``` *** ## Inspecting Indexes ### indexes ```ts db.indexes(tableName: string, schemaName?: string): Promise ``` Returns all indexes defined on the specified table. *** ### IndexInfo ```ts type IndexInfo = { name: string columns: string[] isUnique: boolean origin: "createIndex" | "primaryKeyConstraint" | "uniqueConstraint" } ``` Field descriptions: - `name`: index name - `columns`: columns included in the index - `isUnique`: whether the index is unique - `origin`: source of the index - `"createIndex"`: created explicitly via `createIndex` - `"primaryKeyConstraint"`: generated by a primary key constraint - `"uniqueConstraint"`: generated by a unique constraint Example: ```ts const indexes = await db.indexes("user") ``` *** ## Checking Unique Key Combinations ### isTableHasUniqueKeys ```ts db.isTableHasUniqueKeys( tableName: string, uniqueKeys: string[] ): Promise ``` Checks whether the specified table has a unique constraint or unique index that exactly matches the given column combination. Example: ```ts const hasUnique = await db.isTableHasUniqueKeys( "user", ["email"] ) ``` This method is commonly used to: - Determine whether a unique index needs to be created - Avoid redefining constraints during schema initialization or migration *** ## Usage Recommendations - Inspect existing schema state before applying structural changes - Prefer introspection over assumptions in migration logic - Combine introspection APIs with tooling scripts for analysis or visualization - Avoid calling schema inspection APIs in high-frequency execution paths *** ## Summary Schema Introspection provides runtime visibility into database structure, enabling scripts to safely and reliably understand the current database state. 
It is commonly used in combination with: - **Schema Management** for defining and modifying structures - **Transactions** to ensure atomic schema changes - **Executing SQL & Queries** for data operations based on known schemas --- url: /TestFlight/guide/Changelog/2.4.7/SQLite/Schema Management.md --- # Schema Management This section explains how to create, modify, and remove tables and indexes using SQLite’s structured schema APIs. Instead of relying solely on raw SQL strings, Schema Management provides a **declarative, readable, and safer** approach to defining database structures, making it suitable for long-term, maintainable data models. *** ## Creating Tables ### createTable ```ts db.createTable( name: string, options: { columns: ColumnDefinition[] ifNotExists?: boolean } ): Promise ``` `createTable` creates a new table. Example: ```ts await db.createTable("user", { ifNotExists: true, columns: [ { name: "id", type: "INTEGER", primaryKey: true, autoIncrement: true }, { name: "name", type: "TEXT", notNull: true }, { name: "age", type: "INTEGER" } ] }) ``` *** ### ColumnDefinition ```ts type ColumnDefinition = { name: string type: string primaryKey?: boolean autoIncrement?: boolean notNull?: boolean unique?: boolean indexed?: boolean checkSQL?: string collation?: DatabaseCollation defaultValue?: DatabaseValue defaultSQL?: string references?: ColumnReferences } ``` Common column attributes include: - `name`: column name - `type`: SQLite column type (for example, `INTEGER`, `TEXT`) - `primaryKey`: whether the column is a primary key - `autoIncrement`: enables auto-increment (only valid for integer primary keys) - `notNull`: applies a NOT NULL constraint - `unique`: applies a UNIQUE constraint - `indexed`: creates an index on the column - `checkSQL`: CHECK constraint expression - `collation`: column collation rule - `defaultValue`: default value (parameterized form) - `defaultSQL`: default value (raw SQL expression) - `references`: foreign key reference definition *** ### Default Values `defaultValue` and `defaultSQL` are mutually exclusive. Use one or the other. 
```ts { name: "createdAt", type: "INTEGER", defaultSQL: "CURRENT_TIMESTAMP" } ``` ```ts { name: "status", type: "TEXT", defaultValue: "active" } ``` *** ### Foreign Key References ```ts references?: { table: string column?: string onDelete?: "cascade" | "restrict" | "setNull" | "setDefault" onUpdate?: "cascade" | "restrict" | "setNull" | "setDefault" deferred?: boolean } ``` Example: ```ts { name: "userId", type: "INTEGER", references: { table: "user", column: "id", onDelete: "cascade" } } ``` *** ## Renaming Tables ### renameTable ```ts db.renameTable(name: string, newName: string): Promise ``` Example: ```ts await db.renameTable("user", "users") ``` *** ## Dropping Tables ### dropTable ```ts db.dropTable(name: string): Promise ``` Example: ```ts await db.dropTable("temp_data") ``` *** ## Creating Indexes ### createIndex ```ts db.createIndex( name: string, options: { table: string columns: string[] unique?: boolean ifNotExists?: boolean condition?: string } ): Promise ``` Example: ```ts await db.createIndex("idx_user_name", { table: "user", columns: ["name"], unique: false }) ``` *** ### Partial Indexes ```ts await db.createIndex("idx_active_user", { table: "user", columns: ["name"], condition: "age >= 18" }) ``` *** ## Dropping Indexes ### dropIndex ```ts db.dropIndex(name: string): Promise ``` Example: ```ts await db.dropIndex("idx_user_name") ``` *** ### dropIndexOn ```ts db.dropIndexOn(tableName: string, columns: string[]): Promise ``` Drops the index associated with the specified table and column combination. Example: ```ts await db.dropIndexOn("user", ["name"]) ``` *** ## Design Principles The Schema Management API follows these principles: - **Structure over string concatenation** Reduces the risk of SQL syntax errors and improves readability - **Declarative over imperative** Clearly describes what the schema should be, not how to assemble SQL - **One-to-one mapping with SQLite capabilities** Avoids hidden behavior or unexpected abstractions *** ## Usage Recommendations - Prefer `createTable` for long-lived schemas - Use `ifNotExists` to avoid duplicate creation - Create indexes explicitly for frequently queried columns - Ensure `foreignKeysEnabled` is enabled when using foreign keys - Perform schema changes within transactions or migration logic *** ## Next Steps After creating and managing schemas, you may want to: - Inspect existing database structures - Retrieve column, primary key, and foreign key information - Analyze indexes and constraints Continue with: - **Schema Introspection** --- url: /TestFlight/guide/Changelog/2.4.7/SQLite/Transcation.md --- # Transcation This section describes the transaction model, transaction types. The SQLite transaction API uses a **step-based, declarative model** to provide predictable, controlled, and safe transaction behavior at the scripting level. *** ## Transaction Overview A transaction groups multiple database operations into a single atomic unit: - All steps succeed → the transaction is committed - Any step fails → the transaction is rolled back - After rollback, the database remains unchanged SQLite guarantees consistency and isolation internally. JavaScript code does not need to manually handle rollback logic. *** ## transcation ```ts db.transcation( steps: TranscationStep[], options?: { kind?: "deferred" | "immediate" | "exclusive" } ): Promise ``` `transcation` executes a transaction defined by an ordered list of SQL steps. 
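A short sketch of passing the optional `kind`; the step format is described next, and the table and values are taken from the earlier examples:

```ts
// Acquire the write lock up front so the subsequent inserts are guaranteed to proceed
await db.transcation(
  [
    { sql: "INSERT INTO user (name, age) VALUES (?, ?)", args: ["Tom", 18] },
    { sql: "INSERT INTO user (name, age) VALUES (?, ?)", args: ["Lucy", 20] }
  ],
  { kind: "immediate" }
)
```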
*** ## Transaction Steps ```ts type TranscationStep = { sql: string args?: Arguments | null } ``` Each transaction step consists of: - `sql`: the SQL statement to execute - `args`: optional bound parameters Example: ```ts await db.transcation([ { sql: "INSERT INTO user (name, age) VALUES (?, ?)", args: ["Tom", 18] }, { sql: "INSERT INTO user (name, age) VALUES (?, ?)", args: ["Lucy", 20] } ]) ``` Steps are executed sequentially in the order they are declared. *** ## Transaction Kinds Transactions support three kinds, corresponding to SQLite’s native transaction modes: ```ts kind?: "deferred" | "immediate" | "exclusive" ``` *** ### deferred The default transaction kind. - No locks are acquired when the transaction begins - Locks are obtained only when the first read or write occurs - Suitable for most general-purpose transaction scenarios *** ### immediate - Attempts to acquire a write lock immediately when the transaction begins - Fails immediately if the write lock cannot be acquired - Useful when subsequent write operations must be guaranteed to proceed *** ### exclusive - Acquires an exclusive lock when the transaction begins - Blocks all other read and write operations - Intended for scenarios that require full database exclusivity *** ## Error Handling and Rollback A transaction will be rolled back automatically if any of the following occur: - SQL execution errors - Parameter binding errors - Constraint violations (unique, foreign key, etc.) - Database lock conflicts Example: ```ts try { await db.transcation([ { sql: "INSERT INTO user (id, name) VALUES (1, 'Tom')" }, { sql: "INSERT INTO user (id, name) VALUES (1, 'Lucy')" } ]) } catch (e) { console.error("Transaction failed:", e) } ``` *** ## Usage Recommendations - Group operations that must succeed together into a single transaction - Prefer the default `deferred` transaction kind - Use `immediate` when early write-lock acquisition is required - Avoid long-running or unrelated work inside transactions - Do not rely on conditional logic to determine transaction steps *** ## Next Steps After working with transactions, you may want to: - Create and manage table schemas - Define indexes and constraints - Inspect database schema information Continue with: - **Schema Management** - **Schema Introspection** --- url: /TestFlight/guide/Changelog/2.4.7/VideoRecorder/Quick Start/index.md --- # Quick Start `VideoRecorder` is a high-level video capture and recording module provided by Scripting. It encapsulates complex AVFoundation details such as `AVCaptureSession` management, audio/video synchronization, orientation handling, encoding, and pause/resume timelines, and exposes a **state-driven**, script-friendly API. Typical use cases include: - Video recording with pause and resume - Synchronized audio capture - High frame rate and high bitrate recording - ProRes video encoding - Capturing photos during video recording - Runtime camera control (focus, exposure, zoom, torch) - UI-agnostic preview rendering via a separate preview view *** ## Design Principles ### State-Driven Architecture `VideoRecorder` is governed by a strict internal state machine. All public APIs are validated against the current state to prevent invalid transitions and undefined behavior. ```ts type State = | "idle" | "preparing" | "ready" | "recording" | "paused" | "stopping" | "finished" | "failed" ``` State definitions: - **idle** Initial state or fully reset. No active capture session. - **preparing** Capture session, devices, and encoders are being configured. 
- **ready** Fully prepared and ready to start recording. - **recording** Actively recording video (and audio if enabled). - **paused** Recording timeline is paused; media writing is suspended. - **stopping** Recording is ending and the output file is being finalized. - **finished** Recording completed successfully. `details` contains the output file path. - **failed** An error occurred. `details` contains the error message. *** ## Capture Session ```ts class AVCaptureSession { private constructor() } ``` `VideoRecorder.session` exposes a read-only `AVCaptureSession` instance managed internally by `VideoRecorder`. This session is intended for: - Attaching preview views - Integration with other components that require direct access to the capture session The session **cannot be instantiated or modified directly**. *** ## Recorder Configuration ```ts type Configuration = { camera?: { position: "front" | "back" preferredTypes?: CameraType[] } frameRate?: number audioEnabled?: boolean sessionPreset?: SessionPreset videoCodec?: VideoCodec videoBitRate?: number orientation?: VideoOrientation mirrorFrontCamera?: boolean autoConfigAppAudioSession?: boolean } ``` ### Camera Selection - `position` Selects the front or back camera. - `preferredTypes` A prioritized list of physical camera types, such as: - `"wide"` - `"ultraWide"` - `"telephoto"` - `"triple"` If not specified, the system automatically selects an appropriate camera for the chosen position. *** ### Frame Rate Supported frame rates: - 24 - 30 (default) - 60 - 120 (device-dependent) *** ### Audio Recording ```ts audioEnabled?: boolean ``` Enables or disables audio recording. Defaults to `true`. *** ### Session Preset Controls capture resolution and quality, for example: - `"high"` - `"hd1920x1080"` - `"hd4K3840x2160"` *** ### Video Codec Supported codecs include: - `"hevc"` (default) - `"h264"` - `"hevcWithAlpha"` - `"proRes422"` - `"proRes4444"` - `"appleProRes4444XQ"` - `"proResRAW"` Availability depends on device capabilities and OS version. *** ### Video Bit Rate ```ts videoBitRate?: number ``` Specifies the target video bitrate in bits per second. Default value is `5_000_000`. Only applies to certain codecs. *** ### Orientation ```ts orientation?: "portrait" | "landscapeLeft" | "landscapeRight" ``` Defines the recording orientation and affects both pixel buffers and output metadata. *** ### Front Camera Mirroring ```ts mirrorFrontCamera?: boolean ``` Mirrors the front camera image if set to `true`. Defaults to `false`. *** ### Audio Session Management ```ts autoConfigAppAudioSession?: boolean ``` - `true` (default) The system automatically configures the shared `AVAudioSession` for optimal recording. The original audio session state is **not restored** after recording. - `false` The app is responsible for configuring `AVAudioSession`. Incompatible settings may cause recording to fail. *** ## State Access and Observation ### Get Current State ```ts function getState(): Promise ``` Returns the current state of the recorder. *** ### State Change Listener ```ts function addStateListener( listener: (state: State, details?: string) => void ): void ``` - `state` The new recorder state. - `details` - For `"failed"`: error description - For `"finished"`: output file path ```ts function removeStateListener( listener?: (state: State, details?: string) => void ): void ``` If no listener is provided, all listeners are removed. 
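A short sketch of a typical state listener; the handling logic is illustrative, and `details` carries the output path for `"finished"` and the error text for `"failed"` as described above:

```ts
// Observe state transitions; `details` is only meaningful for "finished" and "failed"
VideoRecorder.addStateListener((state, details) => {
  if (state === "finished") {
    console.log("Recording saved to:", details)
  } else if (state === "failed") {
    console.error("Recording failed:", details)
  }
})

// Remove all state listeners once they are no longer needed
VideoRecorder.removeStateListener()
```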
*** ## Recording Lifecycle ### Prepare ```ts function prepare(configuration?: Configuration): Promise ``` - Creates and configures the capture session - Requests camera and microphone permissions - Initializes encoders Transitions to the `ready` state upon success. *** ### Start Recording ```ts function start(toPath: string): Promise ``` - Begins recording - Writes output to the specified file path - Transitions to `recording` *** ### Pause and Resume ```ts function pause(): Promise function resume(): Promise ``` - Pauses and resumes the recording timeline - Does not create separate files - Suitable for long-form or segmented recording *** ### Stop Recording ```ts function stop(options?: { closeSession?: boolean }): Promise ``` - Finalizes the recording - Transitions to `finished` - `details` contains the output file path *** ### Cancel Recording ```ts function cancel(options?: { closeSession?: boolean }): Promise ``` - Aborts recording - Deletes the output file - Does not enter the `finished` state *** ### Reset Recorder ```ts function reset(): Promise ``` - Closes the capture session - Clears all internal state - Returns to the `idle` state Typically used when switching cameras or fully releasing resources. *** ## Photo Capture During Recording ```ts function takePhoto(): Promise ``` - Only valid while in the `recording` state - Captures a still image from the current video stream - Does not interrupt recording *** ## Camera Controls ### Torch (Flashlight) ```ts const hasTorch: boolean const torchMode: "auto" | "on" | "off" function setTorchMode(mode: "auto" | "on" | "off"): void ``` *** ### Focus and Exposure ```ts function setFocusPoint(point: { x: number; y: number }): void function setExposurePoint(point: { x: number; y: number }): void function resetFocus(): void function resetExposure(): void ``` - Coordinates are **normalized** (0.0–1.0) - `{ x: 0, y: 0 }` corresponds to the top-left corner *** ### Zoom Control ```ts const minZoomFactor: number const maxZoomFactor: number const currentZoomFactor: number function setZoomFactor(factor: number): void function rampZoomFactor(toFactor: number, rate: number): void function resetZoom(): void ``` On iOS 18 and later: ```ts const displayZoomFactor: number const displayZoomFactorMultiplier: number ``` These values are intended for user-friendly zoom display in the UI. *** ## Typical Usage Flow ```ts await VideoRecorder.prepare(config) await VideoRecorder.start(path) // recording in progress await VideoRecorder.pause() await VideoRecorder.resume() await VideoRecorder.stop() // or await VideoRecorder.cancel() await VideoRecorder.reset() ``` *** ## Usage Notes and Best Practices - A full lifecycle follows: `prepare → start → stop | cancel` - Switching cameras during `recording` is not recommended; call `reset` first - High frame rates and ProRes codecs require significant performance and storage - When disabling automatic audio session configuration, ensure compatibility manually - Preview rendering is decoupled from recording logic and should be handled separately --- url: /TestFlight/guide/Changelog/2.4.7/VideoRecorder/Quick Start/index_example.md --- # Example ```tsx import { Button, Navigation, NavigationStack, Script, useEffect, Path, MagnifyGesture, useObservable, VideoRecorderPreviewView, VStack, Toolbar, ToolbarItem, ToolbarItemGroup } from "scripting" const recorder = VideoRecorder function View() { // Access dismiss function. 
const dismiss = Navigation.useDismiss() const state = useObservable("idle") const displayZoom = useObservable(1) const startZoom = useObservable(1) const volume = useObservable(0) const toastVisible = useObservable(false) const toastMessage = useObservable("") const position = useObservable("back") function showToast(message: string) { toastVisible.setValue(true) toastMessage.setValue(message) } useEffect(() => { const listener = (value: number, old: number) => { console.log("old:", old, "new:", value) volume.setValue(value) } SharedAudioSession.addOutputVolumeListener(listener) return () => { SharedAudioSession.removeOutputVolumeListener(listener) } }, []) async function prepare() { await recorder.prepare({ camera: { position: position.value, // preferredTypes: ["triple"] }, frameRate: 30, audioEnabled: true, orientation: "portrait", sessionPreset: "high", videoCodec: "appleProRes4444XQ", // autoConfigAppAudioSession: false }) } useEffect(() => { prepare().then(() => { recorder.start( Path.join( FileManager.documentsDirectory, "test.mov" ) ) }).catch(e => { showToast("Failed to prepare:" + String(e)) }) recorder.addStateListener(( newState, details ) => { state.setValue(newState) if (newState === "ready") { // recorder.rampZoomFactor(0.5, 4 } if (newState === "failed") { Dialog.alert(details!) } }) return () => { recorder.reset() } }, []) return `) await webView.present({ navigationTitle: 'WebView Demo' }) webView.dispose() ``` --- url: /TestFlight/guide/Intent/Intent.continueInForeground.md --- `Intent.continueInForeground` is an API that leverages the **iOS 26+ AppIntents framework** to request the system to bring the **Scripting app** to the foreground while a Shortcut is running. This method is used when a script—invoked from Shortcuts—requires full UI interaction within the Scripting app (for example: presenting a form, editing content, picking files, showing a full screen navigation flow, etc.). When invoked: - The system displays a dialog asking the user to continue the workflow in the app. - If the user **confirms**, the system opens Scripting in the foreground and the script continues. - If the user **cancels**, the script terminates immediately. Because this is a system-level capability of AppIntents: **This API requires iOS 26 or later.** *** # API Definition ```ts function continueInForeground( dialog?: Dialog | null, options?: { alwaysConfirm?: boolean; } ): Promise; ``` ## Parameters ### `dialog?: Dialog | null` An optional message explaining why the workflow needs to continue in the foreground. `Dialog` supports four formats: ```ts type Dialog = | string | { full: string; supporting: string } | { full: string; supporting: string; systemImageName: string } | { full: string; systemImageName: string } ``` Examples: ```ts "Do you want to continue in the app?" ``` ```ts { full: "Continue in the Scripting app?", supporting: "The next step requires full UI interaction.", systemImageName: "app" } ``` Passing `null` will suppress the dialog entirely (not recommended unless you fully understand the UX implications). *** ### `options?: { alwaysConfirm?: boolean }` Controls whether the system should always ask for confirmation: - `alwaysConfirm: false` _(default)_ The system may decide whether confirmation is needed based on context. - `alwaysConfirm: true` The system always presents the confirmation dialog. *** # Execution Behavior When called inside `intent.tsx`: 1. The Shortcut pauses execution. 2. The system presents a confirmation dialog. 3. 
If the user accepts: - The Scripting app opens in the foreground. - The script continues executing after the `await`. 4. If the user cancels: - The entire script is terminated immediately. This mirrors the behavior of Apple’s AppIntents `continueInApp()` functionality for system apps. *** # Common Use Cases Use `continueInForeground` when the next step **cannot** run in the background, including: - Presenting a full-screen UI (`Navigation.present`) - Editing content in a custom form or navigation stack - Selecting files or interacting with UI components - Scenarios requiring user input or multi-step flows - Showing UI unavailable to background extensions It should **not** be used for simple data processing or non-interactive tasks. *** # Full Code Example Below is the full working example demonstrating how `continueInForeground` enables a Shortcut to transfer execution into the Scripting app and then return UI input back to Shortcuts. ```tsx // intent.tsx import { Button, Intent, List, Navigation, NavigationStack, Script, Section, TextField, useState } from "scripting" function View() { const dismiss = Navigation.useDismiss() const [text, setText] = useState("") return

} async function runIntent() { // Step 1: Ask the user to continue in the foreground app await Intent.continueInForeground( "Do you want to open the app and continue?" ) // Step 2: Present UI inside the Scripting app const text = await Navigation.present( ) // Step 3: Optionally go back to Shortcuts Safari.openURL("shortcuts://") // Step 4: Return the result to Shortcuts Script.exit( Intent.text( text ?? "No text return" ) ) } runIntent() ``` *** # Notes and Recommendations 1. **Requires iOS 26+** Do not call this API on older systems. 2. **Use dialogs to explain why foreground interaction is required** This improves user trust and Shortcuts clarity. 3. **Always handle the cancellation case** If the user cancels, your script stops. Avoid assuming foreground UI will always appear. 4. **Foreground UI must be meaningful** Only use this API when the upcoming step truly requires UI. 5. **Can be combined with SnippetIntent (iOS 26+)** For workflows that mix in-Shortcut Snippet UI with in-app full UI. *** # Summary `Intent.continueInForeground` enables scripts invoked from Shortcuts to request foreground execution when UI interaction is required. It is: - Based on iOS 26 AppIntents capabilities - A system-confirmed context switch - Essential for workflows involving full UI interactions - Safely integrated via a structured `Dialog` system This method allows Scripting to support advanced automation flows that seamlessly transition between Shortcuts and the full Scripting app UI. --- url: /TestFlight/guide/Intent/Intent.requestConfirmation.md --- `Intent.requestConfirmation` pauses script execution and asks the user to confirm an action through a **system-managed confirmation UI**. The confirmation interface consists of: - A **SnippetIntent UI** (provided by you) - Optional dialog text (system-generated or developer-defined) Behavior: - If the user **confirms**, the script continues (Promise resolves). - If the user **cancels**, the script terminates immediately. - The UI is fully managed by the system. - The presented UI is defined by the provided SnippetIntent’s `perform()` return. **This API is only available on iOS 26 or later.** *** # API Definition ```ts Intent.requestConfirmation( actionName: ConfirmationActionName, snippetIntent: AppIntent, options?: { dialog?: Dialog; showDialogAsPrompt?: boolean; } ): Promise ``` *** # Parameter Details ## actionName: ConfirmationActionName A semantic keyword describing the type of action being confirmed. Apple uses this value to generate natural language around the confirmation UI. Accepted values include: ``` "add" | "addData" | "book" | "buy" | "call" | "checkIn" | "continue" | "create" | "do" | "download" | "filter" | "find" | "get" | "go" | "log" | "open" | "order" | "pay" | "play" | "playSound" | "post" | "request" | "run" | "search" | "send" | "set" | "share" | "start" | "startNavigation" | "toggle" | "turnOff" | "turnOn" | "view" ``` Examples: - `"set"` → “Do you want to set…?” - `"buy"` → “Do you want to buy…?” - `"toggle"` → “Do you want to toggle…?” Choosing the correct semantic verb improves the clarity of the user-facing dialog. *** ## snippetIntent: SnippetIntent This must be an AppIntent registered with: ```ts protocol: AppIntentProtocol.SnippetIntent; ``` The UI displayed in the confirmation step **comes from this SnippetIntent’s `perform()` return**, which must be a TSX-based `VirtualNode`. This is what the user sees and interacts with during confirmation. 
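As a minimal sketch (the intent name and UI below are placeholders used only for illustration, not part of the API), such a SnippetIntent could be registered in `app_intents.tsx` like this:

```tsx
// app_intents.tsx — hypothetical SnippetIntent showing what the
// `snippetIntent` argument of Intent.requestConfirmation expects.
import { AppIntentManager, AppIntentProtocol, Text, VStack } from "scripting"

export const PickColorIntent = AppIntentManager.register({
  name: "PickColorIntent",
  protocol: AppIntentProtocol.SnippetIntent,
  perform: async () => {
    // The returned VirtualNode is rendered inside the confirmation card.
    return (
      <VStack>
        <Text>Pick a theme color</Text>
      </VStack>
    )
  }
})
```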
*** ## `options?: { dialog?: Dialog; showDialogAsPrompt?: boolean }` ### dialog?: Dialog Optional text describing the confirmation request. Supports four formats: ```ts type Dialog = | string | { full: string; supporting: string } | { full: string; supporting: string; systemImageName: string } | { full: string; systemImageName: string }; ``` Examples: ```ts "Are you sure you want to continue?"; ``` More structured version: ```ts { full: "Set this color?", supporting: "This will update the theme color used across the app.", systemImageName: "paintpalette" } ``` Use this to clearly explain what the user is confirming. *** ### showDialogAsPrompt?: boolean - Default: `true` The system shows the dialog as a modal prompt. - `false` The dialog may be integrated directly inside the Snippet card instead of a separate prompt. *** # Execution Flow When the script executes: ```ts await Intent.requestConfirmation(...) ``` The following occurs: 1. Script execution is paused. 2. The system displays: - The SnippetIntent UI - Optional dialog text 3. The user chooses: - **Confirm** → Promise resolves → script continues - **Cancel** → script stops immediately 4. The system handles UI presentation and dismissal automatically. There is no need to manually manage the UI lifecycle. *** # Usage Scenarios Recommended for: - Confirming important changes (colors, appearance, configurations) - Confirming destructive or irreversible actions - Steps requiring explicit user approval - Initiating subflows requiring UI preview or choice (e.g., color picker, item selector) - Sensitive operations (e.g., updating settings, performing actions with side effects) Not recommended for: - Actions that do not require user approval - Simple background data processing *** # Complete Example Below is a full working example demonstrating how to request user confirmation using a SnippetIntent. It assumes you have two SnippetIntent AppIntents: - `PickColorIntent` — allows user to select a color - `ShowResultIntent` — displays the final result ## intent.tsx ```tsx import { Intent, Script } from "scripting"; import { PickColorIntent, ShowResultIntent } from "./app_intents"; async function runIntent() { // Step 1: Ask the user to confirm the action via a Snippet UI await Intent.requestConfirmation("set", PickColorIntent(), { dialog: { full: "Are you sure you want to set this color?", supporting: "This will update the theme color used by your app.", systemImageName: "paintpalette", }, }); // Step 2: Read input from Shortcuts const text = Intent.shortcutParameter?.type === "text" ? Intent.shortcutParameter.value : "No text parameter from Shortcuts"; // Step 3: Return another SnippetIntent result const snippet = Intent.snippetIntent({ snippetIntent: ShowResultIntent({ content: text }), }); Script.exit(snippet); } runIntent(); ``` *** # Notes & Best Practices - **Requires iOS 26+** — do not call this API on earlier versions. - Always include a clear **dialog** message to improve user understanding. - Use for actions that require explicit approval or confirmation. - When possible, combine with SnippetIntent to provide a richer preview UI. - Scripts terminate automatically when the user cancels; do not rely on cleanup code afterward. - Avoid calling it unnecessarily; only use when confirmation is truly meaningful. --- url: /TestFlight/guide/Intent/Quick Start.md --- # Quick Start Scripting allows you to define custom iOS Intents using an `intent.tsx` file. 
These scripts can receive input from the iOS share sheet or the Shortcuts app and return structured results. With optional UI presentation, you can create interactive workflows that process data and deliver output dynamically. *** ## 1. Creating and Configuring an Intent ### 1.1 Create an Intent Script 1. Create a new script project in the Scripting app. 2. Add a file named `intent.tsx` to the project. 3. Define your logic and optionally a UI component inside the file. ### 1.2 Configure Supported Input Types Tap the project title in the editor’s title bar to open **Intent Settings**, then select supported input types: - Text - Images - File URLs - URLs This configuration enables your script to appear in the share sheet or Shortcuts when matching input is provided. *** ## 2. Accessing Input Data Inside `intent.tsx`, use the `Intent` API to access input values. | Property | Description | | -------------------------- | ----------------------------------------------------------------------------------- | | `Intent.shortcutParameter` | A single parameter passed from the Shortcuts app, with `.type` and `.value` fields. | | `Intent.textsParameter` | Array of text strings. | | `Intent.urlsParameter` | Array of URL strings. | | `Intent.imagesParameter` | Array of image file paths (UIImage objects). | | `Intent.fileURLsParameter` | Array of local file URL paths. | Example: ```ts if (Intent.shortcutParameter) { if (Intent.shortcutParameter.type === "text") { console.log(Intent.shortcutParameter.value) } } ``` *** ## 3. Returning a Result Use `Script.exit(result)` to return a result to the caller, such as the Shortcuts app or another script. Valid return types include: - Plain text: `Intent.text(value)` - Attributed text: `Intent.attributedText(value)` - URL: `Intent.url(value)` - JSON: `Intent.json(value)` - File path or file URL: `Intent.file(value)` or `Intent.fileURL(value)` Example: ```ts import { Script, Intent } from "scripting" Script.exit(Intent.text("Done")) ``` *** ## 4. Displaying Interactive UI Use `Navigation.present()` to show a UI before returning a result. You can render a React-style component and then call `Script.exit()` after the interaction completes. Example: ```ts import { Intent, Script, Navigation, VStack, Text } from "scripting" function MyIntentView() { return ( {Intent.textsParameter?.[0]} ) } async function run() { await Navigation.present({ element: }) Script.exit() } run() ``` *** ## 5. Using Intents in the Share Sheet If a script supports a specific input type (e.g., text or image), it will automatically appear as an option in the iOS share sheet: 1. Select content such as text or a file. 2. Tap the Share button. 3. Choose **Scripting** in the share sheet. 4. Scripting will list scripts that support the selected input type. *** ## 6. Using Intents in the Shortcuts App You can call scripts from the Shortcuts app with or without UI: - **Run Script**: Executes the script in the background. - **Run Script in App**: Executes the script in the foreground, with UI presentation support. Steps: 1. Open the Shortcuts app and create a new shortcut. 2. Add the **Run Script** or **Run Script in App** action from Scripting. 3. Choose the target script and pass input parameters if needed. *** ## 7. Intent API Reference ### `Intent` Properties | Property | Type | Description | | ------------------- | ------------------- | ----------------------------------------------- | | `shortcutParameter` | `ShortcutParameter` | Input from Shortcuts with `.type` and `.value`. 
| | `textsParameter` | `string[]` | Array of input text values. | | `urlsParameter` | `string[]` | Array of input URLs. | | `imagesParameter` | `UIImage[]` | Array of image file paths or objects. | | `fileURLsParameter` | `string[]` | Array of input file paths (local file URLs). | ### `Intent` Methods | Method | Return Type | Example | | ------------------------------ | --------------------------- | -------------------------------------- | | `Intent.text(value)` | `IntentTextValue` | `Intent.text("Hello")` | | `Intent.attributedText(value)` | `IntentAttributedTextValue` | `Intent.attributedText("Styled Text")` | | `Intent.url(value)` | `IntentURLValue` | `Intent.url("https://example.com")` | | `Intent.json(value)` | `IntentJsonValue` | `Intent.json({ key: "value" })` | | `Intent.file(path)` | `IntentFileValue` | `Intent.file("/path/to/file.txt")` | | `Intent.fileURL(path)` | `IntentFileURLValue` | `Intent.fileURL("/path/to/file.pdf")` | | `Intent.image(UIImage)` | `IntentImageValue` | `Intent.image(uiImage)` | *** ## 8. Best Practices and Notes - Always call `Script.exit()` to properly terminate the script and return a result. - When displaying a UI, ensure `Navigation.present()` is awaited before calling `Script.exit()`. - Use **"Run Script in App"** for large files or images to avoid process termination due to memory constraints. - You can use `queryParameters` when launching scripts via URL scheme if additional data is needed. --- url: /TestFlight/guide/Intent/SnippetIntent.md --- SnippetIntent is a special kind of AppIntent whose purpose is to render **interactive Snippet UI cards** inside the Shortcuts app (iOS 26+). Key characteristics: 1. Must be registered in `app_intents.tsx` 2. Must specify `protocol: AppIntentProtocol.SnippetIntent` 3. `perform()` **must return a VirtualNode (TSX UI)** 4. Must be returned via `Intent.snippetIntent()` 5. Must be invoked from the Shortcuts action **“Show Snippet Intent”** 6. SnippetIntent is ideal for building interactive, step-based UI inside a Shortcut It is not a data-returning Intent; it is exclusively for UI rendering in Shortcuts. *** # 2. System Requirements **SnippetIntent requires iOS 26 or later.** On iOS versions earlier than 26: - `Intent.snippetIntent()` is not available - `Intent.requestConfirmation()` cannot be used - The Shortcuts action “Show Snippet Intent” does not exist - SnippetIntent-type AppIntents cannot be invoked by Shortcuts *** # 3. Registering a SnippetIntent (app\_intents.tsx) Example: ```tsx export const PickColorIntent = AppIntentManager.register({ name: "PickColorIntent", protocol: AppIntentProtocol.SnippetIntent, perform: async () => { return } }) ``` Another SnippetIntent: ```tsx export const ShowResultIntent = AppIntentManager.register({ name: "ShowResultIntent", protocol: AppIntentProtocol.SnippetIntent, perform: async ({ content }: { content: string }) => { return } }) ``` Requirements: - `protocol` **must** be `AppIntentProtocol.SnippetIntent` - `perform()` **must** return a TSX UI (VirtualNode) - SnippetIntent cannot return non-UI types such as text, numbers, JSON, or file paths *** # 4. Wrapping SnippetIntent Return Values — `Intent.snippetIntent` A SnippetIntent cannot be passed directly to `Script.exit()`. It must be wrapped in a `IntentSnippetIntentValue`. 
```tsx const snippetValue = Intent.snippetIntent( ShowResultIntent({ content: "Example Text" }) ) Script.exit(snippetValue) ``` ### Type Definition ```ts type SnippetIntentValue = { value?: IntentAttributedTextValue | IntentFileURLValue | IntentJsonValue | IntentTextValue | IntentURLValue | IntentFileValue | null snippetIntent: AppIntent } declare class IntentSnippetIntentValue extends IntentValue< 'SnippetIntent', SnippetIntentValue > { value: SnippetIntentValue type: 'SnippetIntent' } ``` This wrapper makes the return value compatible with the Shortcuts “Show Snippet Intent” action. *** # 5. Snippet Confirmation UI — `Intent.requestConfirmation` iOS 26 Snippet Framework provides built-in confirmation UI driven by SnippetIntent. ### API ```ts Intent.requestConfirmation( actionName: ConfirmationActionName, intent: AppIntent, options?: { dialog?: Dialog; showDialogAsPrompt?: boolean; } ): Promise ``` ### ConfirmationActionName A predefined list of semantic action names used by system UI: ``` "add" | "addData" | "book" | "buy" | "call" | "checkIn" | "continue" | "create" | "do" | "download" | "filter" | "find" | "get" | "go" | "log" | "open" | "order" | "pay" | "play" | "playSound" | "post" | "request" | "run" | "search" | "send" | "set" | "share" | "start" | "startNavigation" | "toggle" | "turnOff" | "turnOn" | "view" ``` ### Example ```tsx await Intent.requestConfirmation( "set", PickColorIntent() ) ``` Execution behavior: - Displays a Snippet UI for confirmation - If the user confirms → Promise resolves and script continues - If the user cancels → execution stops (system-driven behavior) *** # 6. The “Show Snippet Intent” Action in Shortcuts (iOS 26+) iOS 26 adds a new Shortcuts action: **Show Snippet Intent** This action is the only correct way to display SnippetIntent UI. ### Comparison with Other Scripting Actions | Shortcuts Action | UI Shown | Supports SnippetIntent | Usage | | ----------------------------- | ------------------------------ | ---------------------- | ------------------- | | Run Script | None | No | Background logic | | Run Script in App | Fullscreen UI inside Scripting | No | Rich app-level UI | | Show Snippet Intent (iOS 26+) | Snippet card UI | Yes | SnippetIntent flows | ### Usage 1. Add “Show Snippet Intent” in Shortcuts 2. Select a Scripting script project 3. The script must return `Intent.snippetIntent(...)` 4. Shortcuts renders the UI in a Snippet card *** # 7. IntentMemoryStorage — Cross-Intent State Store ## Why It Exists Every AppIntent execution runs in an isolated environment: - After an AppIntent `perform()` completes → its execution context is destroyed - After a script calls `Script.exit()` → the JS context is destroyed This means local variables **cannot persist between AppIntent calls**. Snippet flows commonly involve: PickColor → SetColor → ShowResult Therefore a cross-Intent state mechanism is required. *** ## IntentMemoryStorage API ```ts namespace IntentMemoryStorage { function get(key: string): T | null function set(key: string, value: any): void function remove(key: string): void function contains(key: string): boolean function clear(): void function keys(): string[] } ``` ### Purpose - Store small pieces of shared data across multiple AppIntents - Works during the entire Shortcut flow - Ideal for selections, temporary configuration, or intent-to-intent handoff ### Example ```ts IntentMemoryStorage.set("color", "systemBlue") const color = IntentMemoryStorage.get("color") ``` ### Guidelines Not recommended for large data. 
For large data: - Use `Storage` (persistent key-value store) - Or save files via `FileManager` in `appGroupDocumentsDirectory` IntentMemoryStorage should be treated as **temporary, lightweight state**. *** # 8. Full Example Combining All Features (iOS 26+) ## app\_intents.tsx ```tsx export const SetColorIntent = AppIntentManager.register({ name: "SetColorIntent", protocol: AppIntentProtocol.AppIntent, perform: async (color: Color) => { IntentMemoryStorage.set("color", color) } }) export const PickColorIntent = AppIntentManager.register({ name: "PickColorIntent", protocol: AppIntentProtocol.SnippetIntent, perform: async () => { return } }) export const ShowResultIntent = AppIntentManager.register({ name: "ShowResultIntent", protocol: AppIntentProtocol.SnippetIntent, perform: async ({ content }: { content: string }) => { const color = IntentMemoryStorage.get("color") ?? "systemBlue" return } }) ``` ## intent.tsx ```tsx async function runIntent() { // 1. Ask the user to confirm setting the color via Snippet await Intent.requestConfirmation( "set", PickColorIntent() ) // 2. Read Shortcuts input const textContent = Intent.shortcutParameter?.type === "text" ? Intent.shortcutParameter.value : "No text parameter from Shortcuts" // 3. Create final SnippetIntent UI const snippetIntentValue = Intent.snippetIntent({ snippetIntent: ShowResultIntent({ content: textContent }) }) Script.exit(snippetIntentValue) } runIntent() ``` ## Shortcuts Flow 1. User provides text 2. “Show Snippet Intent” runs the script 3. Script displays PickColorIntent confirmation UI via requestConfirmation 4. After confirmation, displays ShowResultIntent Snippet UI 5. Uses IntentMemoryStorage to persist the selected color *** # 9. Summary This document introduces all **new** Scripting features added for iOS 26+: 1. **SnippetIntent** - Registered using `AppIntentManager` - Returns TSX UI - Requires iOS 26+ 2. **Intent.snippetIntent** - Wraps a SnippetIntent for Script.exit 3. **Intent.requestConfirmation** - Presents a confirmation Snippet UI - Requires SnippetIntent 4. **“Show Snippet Intent” action in Shortcuts** - Required to display SnippetIntent UI 5. **IntentMemoryStorage** - Lightweight cross-AppIntent storage - Not suitable for large binary/content data - Complements multi-step Snippet flows --- url: /TestFlight/guide/Interactive Widget and LiveActivity.md --- # Interactive Widget and LiveActivity The **Scripting** app supports adding interactivity to **widgets** and **LiveActivity**, allowing you to create dynamic and interactive UIs using `Button` and `Toggle` components. These controls can execute **AppIntents** to trigger actions, making your widgets and live activities more powerful. *** ## 1. Introduction to AppIntents ### What are AppIntents? An **AppIntent** defines a specific action that can be triggered by a control (e.g., a `Button` or `Toggle`) in a widget or LiveActivity UI. AppIntents enable seamless interaction and functionality by linking UI components with executable logic. ### Supported Protocols AppIntents can implement the following protocols: - **`AppIntent`**: General-purpose intents for triggering custom actions. - **`AudioPlaybackIntent`**: Handles audio playback (e.g., play, pause, or toggle audio states). - **`AudioRecordingIntent`**: Manages audio recording states (requires iOS 18+ and a LiveActivity to stay active during recording). - **`LiveActivityIntent`**: Modifies or manages LiveActivity states. *** ## 2. 
Registering an AppIntent To use an **AppIntent**, it must first be registered in the `app_intents.tsx` file using the `AppIntentManager.register` method. ### Example: Registering AppIntents ```typescript // app_intents.tsx import { AppIntentManager, AppIntentProtocol } from "scripting" // Register an AppIntent const IntentWithoutParams = AppIntentManager.register({ name: "IntentWithoutParams", protocol: AppIntentProtocol.AppIntent, perform: async (params: undefined) => { // Perform a custom action console.log("Intent triggered") // Optionally reload widgets Widget.reloadAll() } }) // Register an AppIntent with parameters const ToggleIntentWithParams = AppIntentManager.register({ name: "ToggleIntentWithParams", protocol: AppIntentProtocol.AudioPlaybackIntent, perform: async (audioName: string) => { // Perform action based on the parameter console.log(`Toggling audio playback for: ${audioName}`) Widget.reloadAll() } }) ``` *** ## 3. Using AppIntents in Widgets or LiveActivity UIs After registering an AppIntent, it can be linked to interactive components like `Button` and `Toggle` in your `widget.tsx` or LiveActivity UI file. ### Example: Using AppIntents in a Widget ```typescript // widget.tsx import { VStack, Button, Toggle } from "scripting" import { IntentWithoutParams, ToggleIntentWithParams } from "./app_intents" import { model } from "./model" function WidgetView() { return ( ] }} trailingSwipeActions={{ actions: [ , ] }} /> )} } async function run() { await Navigation.present({ element: }) Script.exit() } run() ``` --- url: /TestFlight/guide/View Modifiers/Symbol Style.md --- # Symbol Style These modifiers allow you to customize how **SF Symbols** are displayed and animated inside views, particularly with the `Image` component. *** ## `symbolRenderingMode` Sets the **rendering mode** for symbol images within the view. ### Type ```ts symbolRenderingMode?: SymbolRenderingMode ``` ### Options (`SymbolRenderingMode`) - `"monochrome"` – A single-color version using the foreground style - `"hierarchical"` – Multiple layers with different opacities for depth (good for semantic coloring) - `"multicolor"` – Uses the symbol's built-in colors - `"palette"` – Allows layered tinting (like using multiple `foregroundStyle` layers) ### Example ```tsx ``` ### Explanation: - `symbolRenderingMode="palette"` tells the system to render the symbol in **multiple layered styles**. - `foregroundStyle` now uses an object with `primary`, `secondary`, and optionally `tertiary` layers to color those symbol layers individually. > This matches SwiftUI's behavior with `.symbolRenderingMode(.palette)` and `.foregroundStyle(primary, secondary, tertiary)`. *** ## `symbolVariant` Displays the symbol with a particular **visual variant**. ### Type ```ts symbolVariant?: SymbolVariants ``` ### Options (`SymbolVariants`) - `"none"` – Default symbol with no variant - `"circle"` – Encapsulated in a circle - `"square"` – Encapsulated in a square - `"rectangle"` – Encapsulated in a rectangle - `"fill"` – Filled symbol - `"slash"` – Adds a slash over the symbol (often used to indicate "off" states) ### Example ```tsx ``` *** ## `symbolEffect` Applies a **symbol animation effect** to the view. This can include transitions (appear/disappear), scale, bounce, rotation, breathing, pulsing, and wiggle effects. You can also bind the effect to a value so it animates when the value changes. ### Type ```ts symbolEffect?: SymbolEffect ``` There are two forms of usage: *** ### 1. **Simple effects** (transition, scale, etc.) 
You can directly assign a symbol effect name: #### Examples ```tsx ``` *** ### 2. **Value-bound discrete effects** These effects animate when the associated value changes. #### Type ```ts symbolEffect?: { effect: DiscreteSymbolEffect value: string | number | boolean } ``` #### Example ```tsx ``` In this example, each time `isFavorited` changes, the bounce animation is triggered. *** ## Available Discrete Effects (`DiscreteSymbolEffect`) These effects can be bound to values: | Category | Effects | | ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Bounce** | `bounce`, `bounceByLayer`, `bounceDown`, `bounceUp`, `bounceWholeSymbol` | | **Breathe** | `breathe`, `breatheByLayer`, `breathePlain`, `breathePulse`, `breatheWholeSymbol` | | **Pulse** | `pulse`, `pulseByLayer`, `pulseWholeSymbol` | | **Rotate** | `rotate`, `rotateByLayer`, `rotateClockwise`, `rotateCounterClockwise`, `rotateWholeSymbol` | | **VariableColor** | `variableColor`, `variableColorCumulative`, `variableColorDimInactiveLayers`, `variableColorHideInactiveLayers`, `variableColorIterative` | | **Wiggle** | `wiggle`, `wiggleByLayer`, `wiggleWholeSymbol`, `wiggleLeft`, `wiggleRight`, `wiggleUp`, `wiggleDown`, `wiggleForward`, `wiggleBackward`, `wiggleClockwise`, `wiggleCounterClockwise` | *** ## Full Example ```tsx ``` This image uses: - a hierarchical rendering mode - a circular variant around the symbol - a pulsing animation bound to `isNotified` state *** ## Summary | Modifier | Description | | --------------------- | ----------------------------------------------------------------------- | | `symbolRenderingMode` | Sets how SF Symbols are rendered (monochrome, multicolor, etc.) | | `symbolVariant` | Applies a visual variant like `fill`, `circle`, or `slash` | | `symbolEffect` | Adds visual animation effects; can be static or bound to a state change | --- url: /TestFlight/guide/View Modifiers/Text Field.md --- # Text Field The following modifiers customize the behavior and appearance of `TextField` components. These allow you to control keyboard behavior, input handling, and submission logic. *** ## `onSubmit` Adds an action to perform when the user submits a value from the text field. ### Type ```ts onSubmit?: (() => void) | { triggers: SubmitTriggers action: () => void } ``` ### Behavior - If provided as a function: ```tsx console.log('Submitted')} /> ``` This is equivalent to: ```tsx console.log('Submitted') }} /> ``` - You can explicitly define what kind of submission should trigger the action using the `triggers` option: ```tsx console.log('Search submitted') }} /> ``` ### `SubmitTriggers` values: - `"text"`: Triggered by text input views like `TextField`, `SecureField`, etc. - `"search"`: Triggered by search fields (e.g., those using the `searchable` modifier). *** ## `keyboardType` Specifies the type of keyboard to display when the text field is focused. ### Type ```ts keyboardType?: KeyboardType ``` ### Options - `'default'` - `'numberPad'` - `'phonePad'` - `'namePhonePad'` - `'URL'` - `'decimalPad'` - `'asciiCapable'` - `'asciiCapableNumberPad'` - `'emailAddress'` - `'numbersAndPunctuation'` - `'twitter'` - `'webSearch'` ### Example ```tsx ``` *** ## `autocorrectionDisabled` Controls whether the system autocorrection is enabled. ### Type ```ts autocorrectionDisabled?: boolean ``` ### Default - `true` — autocorrection is disabled by default. 
### Example ```tsx ``` *** ## `textInputAutocapitalization` Sets how the text input system should automatically capitalize text. ### Type ```ts textInputAutocapitalization?: TextInputAutocapitalization ``` ### Options - `"never"` – No capitalization. - `"characters"` – All letters capitalized. - `"sentences"` – First letter of each sentence capitalized. - `"words"` – First letter of each word capitalized. ### Example ```tsx ``` *** ## `submitScope` Prevents submission triggers from this view from propagating upward to parent views with submission handlers. ### Type ```ts submitScope?: boolean ``` ### Default - `false` — submission actions bubble up by default. ### Example ```tsx ``` This ensures that `onSubmit` handlers defined higher up in the view hierarchy won’t be called when this field is submitted. ## `submitLabel` Sets the label for the submit button. ### Type ```ts submitLabel?: "continue" | "return" | "send" | "go" | "search" | "join" | "done" | "next" | "route" ``` ### Example ```tsx ``` --- url: /TestFlight/guide/View Modifiers/Text View Modifiers.md --- # Text View Modifiers The following properties allow you to style and format text-based views, such as `Text` or `Label`, in ways that closely mirror SwiftUI’s built-in modifiers. By customizing these properties, you can control the font, weight, design, spacing, and other typographic attributes of the displayed text. ## Overview These properties are generally passed to text-related views like `Text` or `Label` components as attributes. For example, you can specify a font size, enable bold formatting, or add an underline with a custom color—all without manually calling multiple modifiers. ```tsx Stylish Text Here ``` In the example above, the text uses a custom font, semibold weight, italic style, a red underline, limits to two lines, and centers the text. *** ## Font Configuration ### `font` Defines the font and size to apply to the text. - **Number**: When you provide a number (e.g., `14`), it applies the system font at that size. - **Preset Font Name** (`Font` type): Use one of the built-in text styles (`"largeTitle"`, `"title"`, `"headline"`, `"subheadline"`, `"body"`, `"callout"`, `"footnote"`, `"caption"`). The system determines the size and weight based on that style. - **Object with name and size**: Apply a custom font by specifying the `name` and `size`. ```tsx System Font, Size 20 System Headline Font Custom Font ``` *** ### `fontWeight` Sets the thickness of the font’s stroke. Options range from `"ultraLight"` to `"black"`. ```tsx Bold Text ``` *** ### `fontWidth` Specifies the width variant of the font if available. Possible values include `"compressed"`, `"condensed"`, `"expanded"`, and `"standard"`. You can also use a numeric value if supported. ```tsx Condensed Width Font ``` *** ### `fontDesign` Modifies the font design. Options include `"default"`, `"monospaced"`, `"rounded"`, `"serif"`. ```tsx Rounded Font Design ``` *** ## Text Formatting ### `minScaleFactor` A number between 0 and 1 that indicates how much the text can shrink if it doesn’t fit the available space. For example, `0.5` means the text can shrink down to 50% of its original size to fit. ```tsx This text shrinks slightly if it doesn't fit. ``` *** ### `bold` Applies a bold font weight if `true`. ```tsx This text is bold ``` *** ### `baselineOffset` Adjusts the text’s vertical position relative to its baseline. Positive values move the text up, negative values move it down. 
```tsx Text shifted up ``` *** ### `kerning` Controls the spacing between characters. A positive value increases spacing; a negative value decreases it. ```tsx Extra spaced text ``` *** ### `italic` Applies an italic style if `true`. ```tsx Italic text ``` *** ### `monospaced` Forces all child text to use a monospaced variant, if available. ```tsx Monospaced text ``` *** ### `monospacedDigit` Uses fixed-width digits while leaving other characters as they are. This helps align numbers vertically, useful for tables or timers. ```tsx Digits aligned in monospace 1234 ``` *** ## Text Decorations ### `strikethrough` Applies a strikethrough (line through the text). You can provide a color, or an object specifying a pattern and color. - **Color only**: `strikethrough="red"` - **Object**: `strikethrough={{ pattern: 'dash', color: 'blue' }}` ```tsx Strikethrough text in gray Dotted red strikethrough ``` *** ### `underline` Applies an underline in a similar way to `strikethrough`. - **Color only**: `underline="blue"` - **Object**: `underline={{ pattern: 'dashDot', color: 'green' }}` ```tsx Underlined text in blue Dotted pink underline ``` *** ## Line & Layout Control ### `lineLimit` Specifies how many lines of text can display. You can provide: - A single number for a maximum line limit. - An object `{ min?: number; max: number; reservesSpace?: boolean }` to specify a minimum and maximum number of lines, and whether the text should reserve space for all those lines even when not used. ```tsx This text will be truncated if it doesn't fit on one line. This text can display between 2 and 4 lines, and always reserves space for 4 lines, preventing layout shifts. ``` *** ### `lineSpacing` Sets the spacing between lines, in pixels. ```tsx Line spacing set to 5 pixels ``` *** ### `multilineTextAlignment` Sets the text alignment for multi-line text: `"leading"`, `"center"`, or `"trailing"`. ```tsx This text is centered across multiple lines. ``` *** ### `truncationMode` Specifies how to truncate a line of text when it is too long to fit within the available horizontal space. #### Type ```ts type TruncationMode = "head" | "middle" | "tail" ``` #### Description Defines the position at which the text is truncated: - `"head"`: Truncates the beginning of the line, preserving the end. - `"middle"`: Truncates the middle of the line, preserving both the beginning and end. - `"tail"`: Truncates the end of the line, preserving the beginning. ```tsx This is a very long piece of text that may be truncated. ``` *** ### `allowsTightening?: boolean` Determines whether the system is allowed to reduce the spacing between characters to fit the text within a line when needed. #### Type `boolean` #### Default `false` #### Description When set to true, the system may compress the character spacing to avoid truncation and better fit the content. This is typically used to improve layout responsiveness in constrained environments. ```tsx Condensed text if necessary ``` *** ## Summary By combining these properties, you can fully control the typography of your text-based views without needing multiple wrapper components or modifiers. Whether you need a bold, italic headline font with custom kerning and underline, or a simple body font that truncates after two lines, these options cover a broad range of text styling needs. --- url: /TestFlight/guide/View Modifiers/Toast.md --- # Toast The `toast` view modifier displays a temporary notification message (toast) over the current view. 
It is typically used to show short feedback messages such as “Saved successfully,” “Action completed,” or “Network error.” You can show a simple text message or provide a fully custom view as the toast’s content. You can also control its duration, position, background color, text color, corner radius, and shadow style. *** ## Type Definition ```ts toast?: { duration?: number | null position?: "top" | "bottom" | "center" backgroundColor?: Color | null textColor?: Color | null cornerRadius?: number | null shadowRadius?: number | null } & ( | { message: string; content?: never } | { message?: never; content: VirtualNode } ) & ({ isPresented: boolean onChanged: (isPresented: boolean) => void } | { isPresented: Observable }) ``` *** ## Property Descriptions ### `isPresented: boolean` and `onChanged(isPresented: boolean): void` **Description**: Uses the `isPresented` and `onChanged` properties to control the visibility and behavior of the toast. **Example**: ```tsx const [showToast, setShowToast] = useState(false) toast={{ isPresented: showToast, onChanged: setShowToast, message: "Saved successfully" }} ``` *** ### `isPresented: Observable` **Description**: Uses the `isPresented` observable to control the visibility and behavior of the toast. **Example**: ```tsx const showToast = useObservable(false) toast={{ isPresented: showToast, message: "Saved successfully" }} ``` *** ### `duration?: number | null` **Description**: Specifies how long (in seconds) the toast should remain visible. Defaults to `2` seconds. **Example**: ```tsx toast={{ isPresented: showToast, onChanged: setShowToast, duration: 3, message: "Action completed" }} ``` *** ### `position?: "top" | "bottom" | "center"` **Description**: Controls where the toast appears on the screen. Available values: - `"top"` – Displays the toast at the top. - `"bottom"` – Displays the toast at the bottom (default). - `"center"` – Displays the toast in the center. **Example**: ```tsx toast={{ isPresented: showToast, onChanged: setShowToast, position: "top", message: "New message received" }} ``` *** ### `backgroundColor?: Color | null` **Description**: Sets the background color of the toast. **Example**: ```tsx toast={{ isPresented: showToast, onChanged: setShowToast, backgroundColor: "blue", message: "Upload successful" }} ``` *** ### `textColor?: Color | null` **Description**: Sets the text color of the toast message. **Example**: ```tsx toast={{ isPresented: showToast, onChanged: setShowToast, textColor: "white", message: "Download failed" }} ``` *** ### `cornerRadius?: number | null` **Description**: Sets the corner radius of the toast. Defaults to `16`. **Example**: ```tsx toast={{ isPresented: showToast, onChanged: setShowToast, cornerRadius: 8, message: "Item added" }} ``` *** ### `shadowRadius?: number | null` **Description**: Sets the blur radius of the toast’s shadow. Defaults to `4`. **Example**: ```tsx toast={{ isPresented: showToast, onChanged: setShowToast, shadowRadius: 6, message: "Success" }} ``` *** ## Displaying a Simple Message **Example**: ```tsx function View() { const [showToast, setShowToast] = useState(false) return ( ``` ### Button Executing an AppIntent ```tsx