watching-you
watching-you is a JavaScript library for building animations that watch anything in the DOM 👀.
Top Related Projects
- Pretrained models for TensorFlow.js
- OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation
- Cross-platform, customizable ML solutions for live and streaming media.
- A WebGL accelerated JavaScript library for training and deploying ML models.
Quick Overview
Watching-you is a lightweight JavaScript library that detects user attention on web pages. It tracks whether the user is actively viewing the page or has switched to another tab or window, providing developers with tools to respond to user attention changes.
Pros
- Easy to implement with minimal setup required
- Provides real-time updates on user attention status
- Customizable events and callbacks for different attention states
- Lightweight and has no dependencies
Cons
- Limited browser support for some advanced features
- May not work accurately in all scenarios (e.g., multi-monitor setups)
- Potential privacy concerns for users who are unaware of attention tracking
- Documentation could be more comprehensive
Code Examples
Basic usage:

```javascript
import WatchingYou from 'watching-you';

const watcher = new WatchingYou();
watcher.on('focus', () => console.log('User is viewing the page'));
watcher.on('blur', () => console.log('User switched away from the page'));
```

Custom idle time detection:

```javascript
const watcher = new WatchingYou({ idleTime: 5000 });
watcher.on('idle', () => console.log('User has been inactive for 5 seconds'));
```

Checking the current attention state:

```javascript
const watcher = new WatchingYou();
console.log(watcher.getAttentionState()); // Returns 'focus', 'blur', or 'idle'
```
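The event-driven API shown in these examples can be modeled as a small state machine. Below is a minimal, Node-runnable sketch of that idea; the `AttentionTracker` class and its internals are illustrative assumptions for explanation only, not the library's actual implementation:

```javascript
// Minimal sketch of an attention-state machine like the one the examples
// describe. AttentionTracker and its internals are illustrative assumptions,
// not the library's actual code.
class AttentionTracker {
  constructor({ idleTime = 5000 } = {}) {
    this.idleTime = idleTime; // ms of inactivity before 'idle' would fire
    this.state = 'focus';     // one of 'focus' | 'blur' | 'idle'
    this.listeners = {};
  }
  on(event, handler) {
    (this.listeners[event] ||= []).push(handler);
    return this;
  }
  emit(event) {
    this.state = event;
    (this.listeners[event] || []).forEach((handler) => handler());
  }
  getAttentionState() {
    return this.state;
  }
}

const tracker = new AttentionTracker({ idleTime: 5000 });
tracker.on('idle', () => console.log('User has been inactive'));
tracker.emit('idle'); // prints "User has been inactive"
console.log(tracker.getAttentionState()); // → 'idle'
```

In a real implementation the `idle` transition would be driven by a timer reset on mouse or keyboard activity; here it is triggered manually so the state flow is easy to follow.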
Getting Started
To use watching-you in your project, follow these steps:

1. Install the package:

```bash
npm install watching-you
```

2. Import and initialize in your JavaScript file:

```javascript
import WatchingYou from 'watching-you';

const watcher = new WatchingYou();
watcher.on('focus', () => { console.log('User is viewing the page'); });
watcher.on('blur', () => { console.log('User switched away from the page'); });
watcher.on('idle', () => { console.log('User is inactive'); });
```

3. Customize options as needed:

```javascript
const watcher = new WatchingYou({
  idleTime: 10000, // Set idle time to 10 seconds
  captureMouseMovement: true,
});
```
Competitor Comparisons
Pros of pose-animator
- More comprehensive pose estimation, including facial landmarks
- Supports real-time animation of SVG characters
- Includes a web-based demo for easy testing and visualization
Cons of pose-animator
- Requires more computational resources due to complex pose estimation
- Limited to animating pre-designed SVG characters
- May have higher latency compared to simpler tracking methods
Code Comparison
pose-animator:

```javascript
const pose = await net.estimateSinglePose(video, {
  flipHorizontal: false
});
const keypoints = pose.keypoints;
updateSVGCharacter(keypoints);
```

watching-you:

```javascript
const face = await faceapi.detectSingleFace(video, options);
if (face) {
  updateEyePosition(face.landmarks.getLeftEye(), face.landmarks.getRightEye());
}
```
Summary
pose-animator offers more advanced pose estimation and character animation capabilities, making it suitable for complex interactive applications. However, it may require more resources and have higher latency. watching-you focuses on simpler face tracking, potentially offering better performance for basic eye-following effects. The choice between the two depends on the specific requirements of the project, balancing between feature richness and performance.
Pretrained models for TensorFlow.js
Pros of tfjs-models
- Comprehensive collection of pre-trained models for various tasks
- Backed by Google's TensorFlow team, ensuring high-quality and well-maintained code
- Extensive documentation and community support
Cons of tfjs-models
- Larger project size and complexity, potentially overwhelming for beginners
- May require more computational resources due to its extensive features
Code Comparison
watching-you:

```javascript
const watchingYou = new WatchingYou({
  el: document.getElementById('eyes'),
  pupilSize: 0.3,
  eyeSize: 100,
  eyeColor: '#4d4d4d',
  pupilColor: '#222222'
});
```

tfjs-models (PoseNet example):

```javascript
const net = await posenet.load();
const pose = await net.estimateSinglePose(imageElement, {
  flipHorizontal: false
});
```
Summary
watching-you is a lightweight library focused on creating interactive eye-following effects, while tfjs-models is a comprehensive collection of machine learning models for various tasks. watching-you is simpler and more specialized, making it easier to implement specific eye-tracking features. tfjs-models offers a broader range of capabilities but may require more setup and resources.
OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation
Pros of openpose
- More comprehensive and advanced pose estimation capabilities
- Supports multi-person pose detection
- Backed by extensive research and academic publications
Cons of openpose
- Higher computational requirements and complexity
- Steeper learning curve for implementation and customization
- Less suitable for lightweight or browser-based applications
Code Comparison
watching-you:

```javascript
watchingYou({
  selector: '.eye',
  eyeSize: 20,
  pupilSize: 10,
  pupilColor: '#000000'
});
```

openpose:

```cpp
op::Wrapper opWrapper{op::ThreadManagerMode::Asynchronous};
opWrapper.start();
auto datumProcessed = opWrapper.emplaceAndPop(datum);
if (datumProcessed != nullptr)
    cv::imshow("OpenPose", datumProcessed->at(0)->cvOutputData);
```

Summary
watching-you is a lightweight JavaScript library for creating eye-following effects, while openpose is a more complex C++ framework for full-body pose estimation. watching-you is easier to implement for simple eye-tracking effects in web applications, whereas openpose offers more advanced capabilities but requires more setup and computational resources.
Cross-platform, customizable ML solutions for live and streaming media.
Pros of MediaPipe
- Comprehensive cross-platform solution for building multimodal machine learning pipelines
- Extensive documentation and examples for various use cases
- Backed by Google, ensuring regular updates and support
Cons of MediaPipe
- Steeper learning curve due to its complexity and wide range of features
- Larger codebase and dependencies, potentially increasing project size
Code Comparison
MediaPipe (Python):

```python
import mediapipe as mp

mp_face_detection = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils

with mp_face_detection.FaceDetection(min_detection_confidence=0.5) as face_detection:
    results = face_detection.process(image)
```

Watching-You (JavaScript):

```javascript
import { WatchingYou } from 'watching-you'

const wy = new WatchingYou()
wy.init()
wy.on('watch', (data) => {
  console.log(data)
})
```
Summary
MediaPipe offers a more comprehensive solution for various machine learning tasks, including face detection, while Watching-You focuses specifically on eye-tracking. MediaPipe provides cross-platform support and extensive documentation but may be more complex to implement. Watching-You offers a simpler API for eye-tracking but has a narrower scope of functionality.
A WebGL accelerated JavaScript library for training and deploying ML models.
Pros of TensorFlow.js
- Comprehensive machine learning library with broad functionality
- Large community and extensive documentation
- Supports both browser and Node.js environments
Cons of TensorFlow.js
- Steeper learning curve for beginners
- Larger file size and potentially slower performance for simple tasks
Code Comparison
watching-you:

```javascript
const watchingYou = new WatchingYou({
  el: document.querySelector('.watching-you'),
  pupilEl: document.querySelector('.pupil'),
});
watchingYou.init();
```

TensorFlow.js:

```javascript
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});

const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);

model.fit(xs, ys, {epochs: 10}).then(() => {
  model.predict(tf.tensor2d([5], [1, 1])).print();
});
```
Summary
watching-you is a lightweight library focused on creating eye-tracking effects, while TensorFlow.js is a comprehensive machine learning library. watching-you offers a simpler API for specific eye-tracking functionality, making it easier to implement for that particular use case. TensorFlow.js provides a wide range of machine learning capabilities but requires more setup and knowledge to use effectively.
README
Features
- Watch the mouse, another DOM element, or even input values; watch anything you want!
- Because it is DOM-based, it is easy to support RWD
- Supports multiple frameworks
- Zero dependencies (in every framework package!)
- Written in TypeScript
- The core code is only 3 kB after gzip compression
- If the element is not on the screen, it automatically stops watching
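The "watching" effect described in the features above boils down to simple geometry: compute the direction from an element to its target and turn that into a CSS transform. A minimal sketch of that math in plain JavaScript (this illustrates the concept only; it is not watching-you's internal code):

```javascript
// Sketch of the geometry behind a "watching" animation: given an element's
// center and a target point, compute the rotation (in degrees) that makes
// the element "look at" the target. Illustrative only, not library code.
function watchAngle(center, target) {
  const dx = target.x - center.x;
  const dy = target.y - center.y;
  return (Math.atan2(dy, dx) * 180) / Math.PI; // 0 = pointing right
}

// A target directly below the element (screen y grows downward):
console.log(watchAngle({ x: 0, y: 0 }, { x: 0, y: 10 })); // ≈ 90

// The result could then drive something like:
// el.style.transform = `rotate(${watchAngle(center, mouse)}deg)`;
```

On each mousemove (or target-position change), recomputing this angle and writing it into the element's `transform` produces the follow effect; the library's auto-stop feature would simply skip this update while the element is off-screen.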
Example
The source code can be found here
Storybook
watching-you's Storybook uses React, but every framework can do the same thing!
https://jj811208.github.io/watching-you/storybook
Documents
⚠️ The API is still subject to change until version 1.0.0 is released ⚠️
Compatibility
If you use watching-you directly without a compiler such as Babel (e.g. a WordPress project importing watching-you from a CDN):
| | Chrome | Firefox | Safari | Edge | Opera | iOS Safari/Chrome | Android Chrome |
|---|---|---|---|---|---|---|---|
| Supported | 70+ | 73+ | 14.1+ | 80+ | 70+ | 14.1+ | ✔ |
But if you use a compiler like Babel and import polyfills, it can even support IE11.
Some references:
https://babeljs.io/
https://github.com/vitejs/vite/tree/main/packages/plugin-legacy
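As a concrete illustration of the Babel route mentioned above, a hypothetical `babel.config.js` targeting IE11 might look like the following. This is a sketch under the assumption that `@babel/preset-env` and `core-js` 3 are installed; it is not taken from the watching-you documentation:

```javascript
// babel.config.js — hypothetical legacy-browser setup (a sketch, not from
// the watching-you docs): transpile modern syntax and inject core-js
// polyfills on demand so the bundle can run in IE11.
module.exports = {
  presets: [
    ['@babel/preset-env', {
      targets: 'ie 11',     // transpile down to IE11-compatible syntax
      useBuiltIns: 'usage', // add polyfills only where they are used
      corejs: 3,            // polyfill implementation to draw from
    }],
  ],
};
```

`useBuiltIns: 'usage'` keeps the bundle small by polyfilling only the features your code (and watching-you) actually use; Vite users can reach a similar result with `@vitejs/plugin-legacy` linked above.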
Note
- When watching `input` or `textarea`, the `text-align` attribute must be `left`
- Some inline elements ignore the `transform` attribute (let's say `span`), so you have to give them the `display` attribute to work properly. (see: https://stackoverflow.com/questions/24961795/how-can-i-use-css3-transform-on-a-span)
- You may need something like `transition: transform .1s` depending on your needs
License