Review the [LocalStack guide](/tutorials/kafka-live-viewer-profiles/quickstart-localstack) for the default configuration for each component. Open public access to the two frontend applications and the Snowplow Collector using a HTTP load balancer so that anyone can watch the video, submit events to the pipeline, and see information on concurrent users.
The applications listen for HTTP traffic on the following ports:

- Web tracker frontend: 3000
## Next Steps
- You can implement Snowplow media tracking on any [HTML5](/docs/sources/trackers/javascript-trackers/web-tracker/tracking-events/media/html5/) or [YouTube](/docs/sources/trackers/javascript-trackers/web-tracker/tracking-events/media/youtube/) media of your choice.
- Look into the output from Kafka and extend the Live Viewer to include information on the media being watched and the user.
- Replace Amazon DynamoDB with an alternative, e.g. Google Bigtable or MongoDB, to make the solution cloud agnostic.
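One way to approach that replacement is to hide the store behind a small interface, so the processing code never talks to the DynamoDB client directly. Here is a sketch of that seam in JavaScript; the interface and all names are hypothetical, not taken from the repository:

```javascript
// Hypothetical storage seam: processing code depends only on get/put, so a
// DynamoDB-backed implementation can be swapped for Bigtable, MongoDB, etc.
class InMemoryViewerStore {
  constructor() {
    this.rows = new Map();
  }
  async put(userId, state) {
    this.rows.set(userId, state); // a DynamoDB version would call PutItem here
  }
  async get(userId) {
    return this.rows.get(userId) ?? null; // ...and GetItem here
  }
}

async function recordPause(store, userId) {
  // The processing logic stays identical whichever store is plugged in.
  await store.put(userId, { state: "PAUSED" });
  return store.get(userId);
}

const store = new InMemoryViewerStore();
recordPause(store, "viewer-1").then((row) => console.log(row.state)); // PAUSED
```

With this shape, moving off DynamoDB means writing one new class rather than touching the event-processing code.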

---

`tutorials/kafka-live-viewer-profiles/introduction.md`

## About This Accelerator
Welcome to the **live viewer profiles** solution accelerator for video streaming!
This accelerator demonstrates how to build a real-time use case leveraging **Snowplow event data** to create live viewer profiles for a video streaming site. By combining Snowplow's streaming pipeline with **Apache Kafka**, a **Java application**, and **Amazon DynamoDB**, the solution processes live streaming events to visualize user interactions with video content and advertisements.
On the left side of the image below, someone is watching a video. Their events are sent through a Snowplow pipeline to Kafka, where they are consumed and processed by an application. The result of this processing is displayed in the right-hand window, which shows the number of active users and their current state.
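The processing itself happens in the Java backend described later; purely to illustrate the idea, here is a minimal JavaScript sketch of how a stream of media events could be reduced to per-viewer states and a concurrent-viewer count. The event and state names are hypothetical, not the backend's actual ones:

```javascript
// Illustrative only: reduce each viewer's latest media event to a display state.
const EVENT_TO_STATE = {
  play: "WATCHING",
  pause: "PAUSED",
  ad_start: "WATCHING_AD",
  ad_skip: "WATCHING",
  end: "FINISHED",
};

function viewerStates(events) {
  // events: [{ userId, type }] in arrival order; the last event per user wins.
  const states = new Map();
  for (const { userId, type } of events) {
    states.set(userId, EVENT_TO_STATE[type] ?? "UNKNOWN");
  }
  return states;
}

function activeViewerCount(states) {
  // Viewers who have not finished watching count as concurrent viewers.
  return [...states.values()].filter((s) => s !== "FINISHED").length;
}

const states = viewerStates([
  { userId: "a", type: "play" },
  { userId: "b", type: "play" },
  { userId: "a", type: "pause" },
  { userId: "b", type: "end" },
]);
console.log(states.get("a")); // PAUSED
console.log(activeViewerCount(states)); // 1
```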

Through this hands-on guide, you’ll learn how to build, deploy, and extend real-time, event-driven architectures using Snowplow and Kafka, enabling personalized recommendations, real-time insights, and dynamic analytics for streaming platforms. The framework is inspired by common challenges in video streaming, including tracking user behavior, ad engagement, and session activities, with the goal of maintaining up-to-date viewer profiles in DynamoDB.
This accelerator is open source and can serve as the foundation to build practical applications like real-time viewer insights, engagement analytics, ad performance tracking, and personalized recommendations. Whether you're optimizing ad placements or enhancing viewer satisfaction, this guide equips you to unlock the full potential of Snowplow event data.
Please start by reviewing how the application works on the next page on LocalStack, even if you're planning to deploy with Terraform.
---
## Solution Accelerator Code
[**The code for this infrastructure is available here on GitHub.**](https://github.com/snowplow-industry-solutions/kafka-live-viewer-profiles)
---
## Architecture
The solution comprises several interconnected components:

- **Web Tracking Application**:
  - A React application with a video to watch. Snowplow's media tracking has been configured to send events (e.g., play, pause, ad skipped) to the [Snowplow Collector](/docs/fundamentals/architecture-overview).
  - Code available in the [tracker-frontend](https://github.com/snowplow-industry-solutions/kafka-live-viewer-profiles/tree/main/tracker-frontend) folder on GitHub.
- **Snowplow Collector**:
  - Collects and forwards events via [Stream Enrich](/docs/fundamentals/architecture-overview) and Kinesis to [Snowbridge](/docs/destinations/forwarding-events/snowbridge).
- **Live Viewer Backend**:
  - A Java application which processes events from Kafka, stores the data in DynamoDB, and generates JSON state data for the Live Viewer Frontend.
  - Code available in the [live-viewer-backend](https://github.com/snowplow-industry-solutions/kafka-live-viewer-profiles/tree/main/live-viewer-backend) folder on GitHub.
- **Live Viewer Frontend**:
  - An HTML website which displays the state of users currently watching the video.
  - Code available in the [live-viewer-frontend](https://github.com/snowplow-industry-solutions/kafka-live-viewer-profiles/tree/main/live-viewer-frontend) folder on GitHub.
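Since the backend hands the frontend JSON state data, here is a tiny JavaScript sketch of what producing such a payload could involve. The field names are invented for illustration; see the live-viewer-backend code for the real format:

```javascript
// Hypothetical shape of the state JSON handed to the frontend; the field
// names are illustrative, not the backend's actual schema.
function buildStatePayload(states) {
  // states: Map of userId -> state string, e.g. "WATCHING" or "PAUSED"
  const viewers = [...states.entries()].map(([userId, state]) => ({ userId, state }));
  return JSON.stringify({ activeViewers: viewers.length, viewers });
}

const payload = buildStatePayload(
  new Map([["a", "WATCHING"], ["b", "PAUSED"]])
);
console.log(payload);
// {"activeViewers":2,"viewers":[{"userId":"a","state":"WATCHING"},{"userId":"b","state":"PAUSED"}]}
```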
The following diagram maps out where each component sits in the end-to-end communication flow.

### Components & Configuration
The following files in the [GitHub repository](https://github.com/snowplow-industry-solutions/kafka-live-viewer-profiles) can be used to configure the project's components.
4. Create a `.env` file based on `.env.example`. You can leave the AWS variables as placeholders when using LocalStack.
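The AWS variables only need real values for an actual AWS deployment; LocalStack is happy with dummy credentials. An illustrative fragment is shown below, but note that the variable names here are guesses for a typical AWS setup; `.env.example` in the repository is the authoritative list:

```bash
# Illustrative only: check .env.example for the variables the project actually reads.
# LocalStack accepts dummy AWS credentials, so placeholders like these are fine locally.
AWS_ACCESS_KEY_ID=test
AWS_SECRET_ACCESS_KEY=test
AWS_DEFAULT_REGION=eu-west-1
```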
### Step 2: Open the Web Tracking Frontend
Visit [http://localhost:3000](http://localhost:3000) to configure the Stream Collector endpoint and start tracking events. Enter the Collector URL: `localhost:9090` and click `Create tracker`.

On the next screen, click `Custom media tracking demo`. This will bring up a video and a screen that displays information on what events are sent from the browser to the pipeline. If you want to simulate multiple users watching the video at the same time, you can open this in separate browsers.

Congratulations! You have successfully run the accelerator to stream web behavioral data.
## Next Steps
- You can implement Snowplow media tracking on any [HTML5](/docs/sources/trackers/javascript-trackers/web-tracker/tracking-events/media/html5/) or [YouTube](/docs/sources/trackers/javascript-trackers/web-tracker/tracking-events/media/youtube/) media of your choice.
- Look into the output from Kafka and extend the Live Viewer to include information on the media being watched and the user.
- Use our supplied Terraform in the next section to run this on AWS and make it publicly available.
## Other Things You Can Do
### Inspect Infrastructure with LocalStack UI
Visit the [LocalStack UI](https://app.localstack.cloud/) to inspect infrastructure components such as Kinesis and DynamoDB. Please note that a LocalStack account is required to view this.