
[Components] speak_ai - new components #16256


Merged
jcortes merged 1 commit into master from speak-ai-new-components on Apr 15, 2025

Conversation

jcortes
Collaborator

@jcortes jcortes commented Apr 10, 2025

WHY

Resolves #16200

Summary by CodeRabbit

  • New Features

    • Introduced new capabilities for text analysis, transcription retrieval, and media upload using our AI engine.
    • Added event sources that trigger workflows upon new media creation and text analysis, enabling seamless process automation.
    • Enhanced the application with additional properties and methods for managing folders, media, and insights.
  • Chores

    • Updated the component version and added new dependency support.
    • Introduced improved API constants for more consistent integration.

@jcortes jcortes self-assigned this Apr 10, 2025

vercel bot commented Apr 10, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

3 Skipped Deployments
Name | Status | Updated (UTC)
docs-v2 | ⬜️ Ignored | Apr 10, 2025 9:50pm
pipedream-docs | ⬜️ Ignored | Apr 10, 2025 9:50pm
pipedream-docs-redirect-do-not-edit | ⬜️ Ignored | Apr 10, 2025 9:50pm

Contributor

coderabbitai bot commented Apr 10, 2025

Walkthrough

This pull request introduces multiple new modules and updates for the Speak AI component. New actions are added for text analysis, transcription retrieval, and media upload. Additional modules provide API constant definitions, event constants, and webhook management functionality. New source modules emit events related to media creation and text analysis. The core Speak AI application is enhanced with extra property definitions and methods for folder and media management, including API interactions. The package version is updated, and a new dependency is added to support these features.
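For orientation, here is a minimal sketch of the shape one of the new action modules might take, based on the review notes further down (the getTextInsight call with the $ context, the version string, and the $.export summary appear in the comments below; the key, name, and description strings are assumptions):

import app from "../../speak_ai.app.mjs";

export default {
  key: "speak_ai-analyze-text", // key/name/description strings are assumptions
  name: "Analyze Text",
  description: "Analyze text for insights using Speak AI's NLP engine.",
  version: "0.0.1",
  type: "action",
  props: {
    app,
    mediaId: {
      // In the actual component this prop is constrained to text media types
      propDefinition: [
        app,
        "mediaId",
      ],
    },
  },
  async run({ $ }) {
    // Pass the step context ($) so the platform can track the request
    const response = await this.app.getTextInsight({
      $,
      mediaId: this.mediaId,
    });
    $.export("$summary", `Successfully analyzed text for media ID \`${this.mediaId}\`.`);
    return response;
  },
};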

Changes

File(s) | Change Summary
components/speak_ai/actions/{analyze-text, get-transcription, upload-media}/*.mjs | Added new action modules for analyzing text via NLP, retrieving transcriptions, and uploading media files.
components/speak_ai/common/constants.mjs | Introduced API-related constants (BASE_URL, VERSION_PATH, DEFAULT_LIMIT, WEBHOOK_ID).
components/speak_ai/package.json | Updated version from "0.0.1" to "0.1.0" and added a dependency on @pipedream/platform.
components/speak_ai/sources/common/{events, webhook}.mjs | Added modules for standardized event constants and webhook management (with activate/deactivate hooks).
components/speak_ai/sources/new-media-created-instant/{new-media-created-instant, test-event}.mjs | Created a source module and corresponding test event for emitting new media creation events.
components/speak_ai/sources/new-text-analyzed-instant/{new-text-analyzed-instant, test-event}.mjs | Created a source module and corresponding test event for emitting text analysis events.
components/speak_ai/speak_ai.app.mjs | Extended the app with new properties (folderId, mediaType, mediaId) and methods for API requests and resource management.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Action as "Analyze Text Action"
    participant App as "speak_ai.app"
    participant API as "Speak AI API"

    User->>Action: Trigger analyze-text with mediaId
    Action->>App: Call getTextInsight({ mediaId, ...args })
    App->>API: HTTP request to retrieve text insight
    API-->>App: Return text insight response
    App-->>Action: Return response with summary message
    Action-->>User: Output analysis result
sequenceDiagram
    participant Source as "New Media/Text Source"
    participant Webhook as "Webhook Module"
    participant DB as "Database Service"

    Source->>Webhook: Process incoming event resource
    Webhook->>DB: Retrieve/store webhook ID data
    Webhook-->>Source: Return processed event with metadata
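To make the second diagram concrete, here is a hedged sketch of how the common webhook module might persist the webhook ID with the built-in db service across the activate/deactivate hooks (the /webhook endpoints and the WEBHOOK_ID constant come from the files and review comments below; the registration payload, the getEventName() helper, and the response field holding the ID are assumptions for illustration):

import app from "../../speak_ai.app.mjs";
import constants from "../../common/constants.mjs";

export default {
  props: {
    app,
    db: "$.service.db",
    http: "$.interface.http",
  },
  hooks: {
    async activate() {
      // Register a webhook pointing at this source's HTTP endpoint.
      // The payload shape, getEventName(), and the response field are assumptions.
      const { data } = await this.createWebhook({
        data: {
          url: this.http.endpoint,
          events: [
            this.getEventName(),
          ],
        },
      });
      this.setWebhookId(data.id);
    },
    async deactivate() {
      const webhookId = this.getWebhookId();
      if (webhookId) {
        await this.deleteWebhook({ webhookId });
      }
    },
  },
  methods: {
    // Persist the webhook ID between the activate and deactivate hooks
    setWebhookId(value) {
      this.db.set(constants.WEBHOOK_ID, value);
    },
    getWebhookId() {
      return this.db.get(constants.WEBHOOK_ID);
    },
    createWebhook(args = {}) {
      return this.app.post({
        path: "/webhook",
        ...args,
      });
    },
    deleteWebhook({ webhookId, ...args } = {}) {
      return this.app.delete({
        path: `/webhook/${webhookId}`,
        ...args,
      });
    },
  },
};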

Assessment against linked issues

Objective (from #16200) Addressed Explanation
Upload media action: Enable uploading media for transcription workflows [#16200]
Get transcription action: Retrieve transcription data for processed media [#16200]
Analyze text action: Analyze text for insights using Speak AI's NLP engine [#16200]
Webhook sources for media upload & transcription events: Emit appropriate events on new media/transcription [#16200] The sources implemented (media-created and text-analyzed) do not exactly match the expected names in the issue.

Suggested labels

ai-assisted

Suggested reviewers

  • michelle0927

Poem

I'm a little rabbit with ears so bright,
Hop-scotching through code both day and night,
With actions and events that make carrots dance,
In fields of code, I leap at every chance,
Celebrating changes with a hop and a glance! 🥕🐇

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ESLint

If the error stems from missing dependencies, add them to the package.json file. For unrecoverable errors (e.g., due to private dependencies), disable the tool in the CodeRabbit configuration.

components/speak_ai/actions/analyze-text/analyze-text.mjs

Oops! Something went wrong! :(

ESLint: 8.57.1

Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'jsonc-eslint-parser' imported from /eslint.config.mjs
at packageResolve (node:internal/modules/esm/resolve:839:9)
at moduleResolve (node:internal/modules/esm/resolve:908:18)
at defaultResolve (node:internal/modules/esm/resolve:1038:11)
at ModuleLoader.defaultResolve (node:internal/modules/esm/loader:557:12)
at ModuleLoader.resolve (node:internal/modules/esm/loader:525:25)
at ModuleLoader.getModuleJob (node:internal/modules/esm/loader:246:38)
at ModuleJob._link (node:internal/modules/esm/module_job:126:49)

components/speak_ai/actions/upload-media/upload-media.mjs
components/speak_ai/actions/get-transcription/get-transcription.mjs

The same failure (Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'jsonc-eslint-parser' imported from /eslint.config.mjs) was reported for these files and for 8 others.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (10)
components/speak_ai/actions/upload-media/upload-media.mjs (3)

7-7: Version inconsistency with package.json.

The component version is set to "0.0.1" while the package.json version is "0.1.0". Consider aligning these versions for consistency.

-  version: "0.0.1",
+  version: "0.1.0",

16-20: Consider adding URL validation.

The URL property doesn't have validation to ensure it's a valid URL format. Consider adding validation to prevent errors when making API requests.

   url: {
     type: "string",
     label: "URL",
     description: "Public URL or AWS signed URL",
+    validate: (value) => {
+      try {
+        new URL(value);
+        return true;
+      } catch (error) {
+        return "Please enter a valid URL";
+      }
+    },
   },

54-81: Add explicit error handling for better user experience.

The run method would benefit from additional error handling to provide clearer feedback to users when API requests fail.

 async run({ $ }) {
   const {
     uploadMedia,
     name,
     url,
     mediaType,
     folderId,
     description,
     tags,
   } = this;

-  const response = await uploadMedia({
-    $,
-    data: {
-      name,
-      url,
-      mediaType,
-      folderId,
-      description,
-      tags: Array.isArray(tags)
-        ? tags.join(",")
-        : tags,
-    },
-  });
+  try {
+    const response = await uploadMedia({
+      $,
+      data: {
+        name,
+        url,
+        mediaType,
+        folderId,
+        description,
+        tags: Array.isArray(tags)
+          ? tags.join(",")
+          : tags,
+      },
+    });
+    
+    $.export("$summary", `Successfully uploaded media with ID \`${response.data.mediaId}\`.`);
+    return response;
+  } catch (error) {
+    $.export("$summary", `Failed to upload media: ${error.message}`);
+    throw error;
+  }
-
-  $.export("$summary", `Successfully uploaded media with ID \`${response.data.mediaId}\`.`);
-  return response;
 },
components/speak_ai/sources/new-text-analyzed-instant/new-text-analyzed-instant.mjs (1)

20-24: Consider passing the context object to API call

The getTextInsight method call doesn't include the context object $ which is different from how it's used in the analyze-text action. Consider adding it for consistency and to enable proper tracking/logging.

- const { data } = await this.app.getTextInsight({
+ const { data } = await this.app.getTextInsight({
+   $: this,
    mediaId: resource.mediaId,
  });
components/speak_ai/sources/common/webhook.mjs (1)

67-82: Consider error handling in webhook API methods

The webhook creation and deletion methods don't include error handling, and because they return the promise directly, a plain try-catch would not catch request failures. Consider making them async and awaiting the call inside a try-catch block so API errors are handled gracefully.

- createWebhook(args = {}) {
+ async createWebhook(args = {}) {
+   try {
-     return this.app.post({
+     return await this.app.post({
        debug: true,
        path: "/webhook",
        ...args,
      });
+   } catch (error) {
+     console.error("Error creating webhook:", error);
+     throw error;
+   }
  },
- deleteWebhook({
+ async deleteWebhook({
    webhookId, ...args
  } = {}) {
+   try {
-     return this.app.delete({
+     return await this.app.delete({
        debug: true,
        path: `/webhook/${webhookId}`,
        ...args,
      });
+   } catch (error) {
+     console.error("Error deleting webhook:", error);
+     throw error;
+   }
  },
components/speak_ai/sources/new-media-created-instant/test-event.mjs (1)

7-355: Future-dated timestamps should be replaced with realistic values

The timestamps in the test event (lines 355, 360) are dated 2025, which is in the future.

Replace future dates with realistic values to avoid confusion:

-    "createdAt": "2025-04-10T21:38:29.310Z",
+    "createdAt": "2023-04-10T21:38:29.310Z",
     // other lines...
-    "updatedAt": "2025-04-10T21:38:29.465Z",
-    "originalCreatedAt": "2025-04-10T21:38:29.465Z",
+    "updatedAt": "2023-04-10T21:38:29.465Z",
+    "originalCreatedAt": "2023-04-10T21:38:29.465Z",
components/speak_ai/sources/new-text-analyzed-instant/test-event.mjs (2)

1-87: Test event structure is well-defined but contains future-dated timestamps

The test event structure for text analysis is comprehensive with sentiment analysis and keyword data.

Replace future dates with realistic values to avoid confusion:

-    "createdAt": "2025-04-10T21:42:28.042Z",
+    "createdAt": "2023-04-10T21:42:28.042Z",
     // other lines...
-    "updatedAt": "2025-04-10T21:43:31.721Z",
-    "originalCreatedAt": "2025-04-10T21:42:28.190Z"
+    "updatedAt": "2023-04-10T21:43:31.721Z",
+    "originalCreatedAt": "2023-04-10T21:42:28.190Z"

7-8: Inconsistent naming format

The name format includes a date string that should be consistent with the timestamp format.

Consider updating the name to match the ISO date format used in timestamps or use a more generic name for test data:

-    "name": "New Note - 10-04-2025 16:42:27",
+    "name": "New Note - 2023-04-10 16:42:27",
components/speak_ai/speak_ai.app.mjs (2)

110-117: Consider adding parameter validation in getInsight

The getInsight method should validate the required mediaId parameter.

Add validation for the required parameter:

   getInsight({
     mediaId, ...args
   } = {}) {
+    if (!mediaId) {
+      throw new Error("mediaId is required for getInsight method");
+    }
     return this._makeRequest({
       path: `/media/insight/${mediaId}`,
       ...args,
     });
   },

118-125: Consider adding parameter validation in getTextInsight

The getTextInsight method should validate the required mediaId parameter.

Add validation for the required parameter:

   getTextInsight({
     mediaId, ...args
   } = {}) {
+    if (!mediaId) {
+      throw new Error("mediaId is required for getTextInsight method");
+    }
     return this._makeRequest({
       path: `/text/insight/${mediaId}`,
       ...args,
     });
   },
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ef0e890 and c7225de.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (12)
  • components/speak_ai/actions/analyze-text/analyze-text.mjs (1 hunks)
  • components/speak_ai/actions/get-transcription/get-transcription.mjs (1 hunks)
  • components/speak_ai/actions/upload-media/upload-media.mjs (1 hunks)
  • components/speak_ai/common/constants.mjs (1 hunks)
  • components/speak_ai/package.json (2 hunks)
  • components/speak_ai/sources/common/events.mjs (1 hunks)
  • components/speak_ai/sources/common/webhook.mjs (1 hunks)
  • components/speak_ai/sources/new-media-created-instant/new-media-created-instant.mjs (1 hunks)
  • components/speak_ai/sources/new-media-created-instant/test-event.mjs (1 hunks)
  • components/speak_ai/sources/new-text-analyzed-instant/new-text-analyzed-instant.mjs (1 hunks)
  • components/speak_ai/sources/new-text-analyzed-instant/test-event.mjs (1 hunks)
  • components/speak_ai/speak_ai.app.mjs (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: pnpm publish
  • GitHub Check: Verify TypeScript components
  • GitHub Check: Publish TypeScript components
🔇 Additional comments (20)
components/speak_ai/sources/common/events.mjs (1)

1-18: Well-structured event constants with clear naming convention.

The event constants follow a consistent category.action pattern which makes them easy to understand and maintain. The comprehensive set of events covers all the necessary aspects of Speak AI's functionality including media processing, text analysis, embed recorders, meeting assistance, chat, and CSV handling.
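As an illustration of that pattern only (the two event names below are placeholders, not the actual constants from the PR):

// components/speak_ai/sources/common/events.mjs - illustrative subset
export default {
  MEDIA_CREATED: "media.created",
  TEXT_ANALYZED: "text.analyzed",
};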

components/speak_ai/common/constants.mjs (1)

1-13: Clean implementation of API configuration constants.

The constants are well-organized and provide a central place for API configuration values. This approach ensures consistency across the integration and makes future updates to the API endpoint or version simpler to manage.
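For reference, the module might be shaped roughly like this; the constant names come from the change summary above, while every value shown is a placeholder assumption:

// components/speak_ai/common/constants.mjs - placeholder values only
const BASE_URL = "https://api.speak-ai.example"; // not the real endpoint
const VERSION_PATH = "/v1";
const DEFAULT_LIMIT = 100;
const WEBHOOK_ID = "webhookId";

export default {
  BASE_URL,
  VERSION_PATH,
  DEFAULT_LIMIT,
  WEBHOOK_ID,
};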

components/speak_ai/package.json (2)

3-3: Version bump reflects the addition of new components.

The version increase to 0.1.0 appropriately reflects the introduction of new functionality while maintaining backward compatibility.


14-17: Appropriate dependency on Pipedream platform utilities.

The addition of the @pipedream/platform dependency is appropriate for Pipedream components, providing access to necessary platform utilities.

components/speak_ai/actions/upload-media/upload-media.mjs (2)

47-52: Well-structured API integration method.

The uploadMedia method follows good practices by accepting optional arguments and passing them through to the API call, making the method flexible and reusable.


9-45: Props are well-defined with clear descriptions.

The component's properties are well-structured with appropriate types, labels, and descriptions. Using propDefinitions from the app object for mediaType and folderId maintains consistency across the integration.

components/speak_ai/actions/analyze-text/analyze-text.mjs (4)

1-42: Well-structured action component with good documentation

The action is well-implemented and follows the standard Pipedream component pattern. It properly defines its props, dependencies, and includes appropriate documentation links.


18-26: Media type constraint is appropriate for text analysis

Good job constraining the mediaId property to only text media types, which is appropriate for a text analysis action.


33-37: API method call is properly structured

The getTextInsight method call is well-structured with proper parameter passing, including the context object $ which allows for operation tracking/logging.


39-40: Clear summary message with proper return value

The summary message is clear and informative, and the component correctly returns the full response object.

components/speak_ai/actions/get-transcription/get-transcription.mjs (3)

1-41: Well-structured transcription retrieval action

The action is well-implemented and follows the standard Pipedream component pattern with proper props, dependencies, and documentation links.


17-25: No media type constraint for transcription retrieval

Unlike the analyze-text action, this component doesn't constrain the media type in the mediaId prop definition. This is appropriate if transcriptions can be retrieved for various media types.


38-39: Specific data extraction in return value

Good job extracting only the transcript data from the response rather than returning the entire response object. This provides a cleaner output for workflows.
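A hedged sketch of what that extraction might look like inside the action's run method (the surrounding module mirrors the analyze-text sketch near the top of the walkthrough; the getInsight delegate and the data.transcript field name are assumptions):

async run({ $ }) {
  const response = await this.app.getInsight({
    $,
    mediaId: this.mediaId,
  });
  $.export("$summary", `Successfully retrieved transcription for media ID \`${this.mediaId}\`.`);
  // Return only the transcript instead of the full insight payload
  return response.data?.transcript;
},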

components/speak_ai/sources/new-text-analyzed-instant/new-text-analyzed-instant.mjs (2)

1-34: Well-structured event source component

This component properly extends the common webhook module and implements all required methods. The description and event handling are clear and appropriate.


26-32: Good metadata generation for event emission

The metadata generation includes all necessary fields (id, summary, timestamp) which will help with event tracking and deduplication.
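An implementation along those lines might read as follows (the resource field names are assumptions; only the id/summary/ts shape is the standard Pipedream source convention):

generateMeta(resource) {
  return {
    id: resource.mediaId,
    summary: `New Text Analyzed: ${resource.mediaId}`,
    ts: Date.parse(resource.updatedAt) || Date.now(),
  };
},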

components/speak_ai/sources/common/webhook.mjs (2)

1-87: Well-structured common webhook module

This module provides a solid foundation for webhook-based source components with proper lifecycle management and extensibility.


45-47: Good use of ConfigurationError for required method implementations

Throwing ConfigurationError for methods that must be implemented by child components ensures developers will receive clear error messages if they forget to implement required methods.

Also applies to: 54-56
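The pattern being praised looks roughly like this in the common module (ConfigurationError is exported by @pipedream/platform; the placeholder method names are illustrative, not the actual ones at the cited lines):

import { ConfigurationError } from "@pipedream/platform";

export default {
  methods: {
    // Concrete sources must override these; the names here are placeholders
    getEventName() {
      throw new ConfigurationError("getEventName is not implemented");
    },
    generateMeta() {
      throw new ConfigurationError("generateMeta is not implemented");
    },
  },
};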

components/speak_ai/sources/new-media-created-instant/new-media-created-instant.mjs (1)

1-35: Source component looks well-structured

The webhook source component extends common functionality properly and includes methods for retrieving and processing media data.

components/speak_ai/speak_ai.app.mjs (2)

7-61: Well-structured prop definitions with async options

The prop definitions for folderId, mediaType, and mediaId are well-implemented with appropriate labels, descriptions, and async options methods.


63-125: API methods implementation looks comprehensive

The methods for API interaction are well-structured, following consistent patterns and providing a clean abstraction for the Speak AI API.
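Putting those two observations together, a rough sketch of the request helper plus one prop definition with async options (axios is the @pipedream/platform export and the /text/insight path appears in the comments above; the auth header, the listFolders path, and the folder response shape are assumptions):

import { axios } from "@pipedream/platform";
import constants from "./common/constants.mjs";

export default {
  type: "app",
  app: "speak_ai",
  propDefinitions: {
    folderId: {
      type: "string",
      label: "Folder ID",
      description: "The folder the media belongs to.",
      optional: true,
      async options() {
        // listFolders and the response shape are assumptions for illustration
        const { data } = await this.listFolders();
        return (data?.folders ?? []).map(({ id: value, name: label }) => ({
          label,
          value,
        }));
      },
    },
  },
  methods: {
    getUrl(path) {
      return `${constants.BASE_URL}${constants.VERSION_PATH}${path}`;
    },
    getHeaders(headers) {
      return {
        ...headers,
        // The exact Speak AI auth header and credential key are assumptions
        "x-access-token": this.$auth.api_key,
      };
    },
    _makeRequest({ $ = this, path, headers, ...args } = {}) {
      return axios($, {
        ...args,
        url: this.getUrl(path),
        headers: this.getHeaders(headers),
      });
    },
    listFolders(args = {}) {
      return this._makeRequest({
        path: "/folder", // path is an assumption
        ...args,
      });
    },
    getTextInsight({ mediaId, ...args } = {}) {
      return this._makeRequest({
        path: `/text/insight/${mediaId}`,
        ...args,
      });
    },
  },
};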

@jcortes jcortes merged commit 696d83f into master Apr 15, 2025
11 checks passed
@jcortes jcortes deleted the speak-ai-new-components branch April 15, 2025 13:34