In this episode, Jonathan Stephens and Jared Heinly delve into the intricacies of COLMAP, a powerful tool for 3D reconstruction from images. They discuss the COLMAP workflow, including feature extraction, correspondence search, incremental reconstruction, and the importance of camera models. The conversation also covers advanced topics like geometric verification, bundle adjustment, and the newer GLOMAP method, which offers a faster alternative to traditional reconstruction techniques. Listeners are encouraged to experiment with COLMAP and learn through hands-on experience.

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
--------
57:02
Tips and Tricks for 3D Reconstruction in Different Environments
In this episode, we discuss practical tips and challenges in 3D reconstruction from images, focusing on various environments such as urban, indoor, and outdoor settings. We explore issues like repetitive structures, lighting conditions, and the impact of reflections and shadows on reconstruction quality. The conversation also touches on the importance of camera motion, lens distortion, and the role of machine learning in enhancing reconstruction processes. Listeners gain insights into optimizing their 3D capture techniques for better results.

Key Takeaways:
- Repetitive structures can confuse computer vision algorithms.
- Lighting conditions greatly affect image quality and reconstruction accuracy.
- Wide-angle lenses can help capture more unique features.
- Indoor environments present unique challenges like textureless walls.
- Aerial imaging requires careful management of lens distortion.
- Understanding the application context is crucial for effective 3D reconstruction.
- Camera motion should be varied to avoid distortion and drift.
- Planning captures based on goals can lead to better results.

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
--------
1:21:23
Exploring Depth Maps in Computer Vision
In this episode of Computer Vision Decoded, Jonathan Stephens and Jared Heinly explore the concept of depth maps in computer vision. They discuss the basics of depth and depth maps, their applications in smartphones, and the various types of depth maps. The conversation delves into the role of depth maps in photogrammetry and 3D reconstruction, as well as future trends in depth sensing and machine learning. The episode highlights the importance of depth maps in enhancing photography, gaming, and autonomous systems.

Key Takeaways:
- Depth maps represent how far away objects are from a sensor.
- Smartphones use depth maps for features like portrait mode.
- There are multiple types of depth maps, including absolute and relative.
- Depth maps are essential in photogrammetry for creating 3D models.
- Machine learning is increasingly used for depth estimation.
- Depth maps can be generated from various sensors, including LiDAR.
- The resolution and baseline of cameras affect depth perception.
- Depth maps are used in gaming for rendering and performance optimization.
- Sensor fusion combines data from multiple sources for better accuracy.
- The future of depth sensing will likely involve more machine learning applications.

Episode Chapters:
00:00 Introduction to Depth Maps
00:13 Understanding Depth in Computer Vision
06:52 Applications of Depth Maps in Photography
07:53 Types of Depth Maps Created by Smartphones
08:31 Depth Measurement Techniques
16:00 Machine Learning and Depth Estimation
19:18 Absolute vs. Relative Depth Maps
23:14 Disparity Maps and Depth Ordering
26:53 Depth Maps in Graphics and Gaming
31:24 Depth Maps in Photogrammetry
34:12 Utilizing Depth Maps in 3D Reconstruction
37:51 Sensor Fusion and SLAM Technologies
41:31 Future Trends in Depth Sensing
46:37 Innovations in Computational Photography

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
--------
57:31
What's New in 2025 for Computer Vision?
After an 18-month hiatus, we are back! In this episode of Computer Vision Decoded, hosts Jonathan Stephens and Jared Heinly discuss the latest advancements in computer vision technology, personal updates, and insights from the industry. They explore topics such as real-time 3D reconstruction, computer vision research, SLAM, event cameras, and the impact of generative AI on robotics. The conversation highlights the importance of merging traditional techniques with modern machine learning approaches to solve real-world problems effectively.

Chapters:
00:00 Intro & Personal Updates
04:36 Real-Time 3D Reconstruction on iPhones
09:40 Advancements in SfM
14:56 Event Cameras
17:39 Neural Networks in 3D Reconstruction
26:30 SLAM and Machine Learning Innovation
29:48 Applications of SLAM in Robotics
34:19 NVIDIA's Cosmos and Physical AI
40:18 Generative AI for Real-World Applications
43:50 The Future of Gaussian Splatting and 3D Reconstruction

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
--------
50:03
A Computer Vision Scientist Reacts to the iPhone 15 Announcement
In this episode of Computer Vision Decoded, we dive into our in-house computer vision expert's reaction to the iPhone 15 and iPhone 15 Pro announcement. We cover the camera upgrades, decode what a quad pixel sensor means, and talk about the importance of depth maps.

Episode timeline:
00:00 Intro
02:59 iPhone 15 Overview
05:15 iPhone 15 Main Camera
07:20 Quad Pixel Sensor Explained
15:45 Depth Maps Explained
22:57 iPhone 15 Pro Overview
27:01 iPhone 15 Pro Cameras
32:20 Spatial Video
36:00 A17 Pro Chipset

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
A tidal wave of computer vision innovation is quickly having an impact on everyone's lives, but not everyone has the time to sit down and read through a pile of news articles to learn what it means for them. In Computer Vision Decoded, we sit down with Jared Heinly, the Chief Scientist at EveryPoint, to discuss topics in today's quickly evolving world of computer vision and decode what they mean for you. If you want to be sure you understand everything happening in the world of computer vision, don't miss an episode!