Abstract: Mobile robots are no longer a vision but have become reality in many laboratories. However, currently available systems offer limited functionality and cannot be flexibly adapted to new tasks. Specialized in transporting microtiter plates (MTPs), these robots typically require automation-friendly devices for interaction, with device localization facilitated by fiducial markers that humans must add manually. Device control is usually executed by overarching laboratory software. However, since laboratories process a wide range of sample vessels or use manual (i.e. not digitized) devices, there is a need for flexible mobile robots that can autonomously adapt to different environments. The Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) is currently developing the control software to enable such a robotic system. Flexible robotic interaction with non-standardized objects in dynamic laboratory environments raises complex challenges. In particular, identifying, localizing, and retrieving relevant objects from arbitrary positions in cluttered scenes is difficult for technical systems. This issue can be tackled with modern robotic vision and visual AI approaches. Our previous work, in particular a large-scale dataset of retail objects (analogous to consumable containers) and the localization of these objects in complex shelf scenes based on it, provides a strong foundation for object recognition algorithms for inventory-handling robots and indicates the practical usability of these approaches. Transferring the methods to the laboratory domain makes flexible assistance at the facility feasible. Object localization is further complicated by the omnipresence of transparent and fully metallic objects, which current depth sensors cannot perceive: because most of these sensors rely on emitted light (e.g. structured light or time of flight), such objects are practically invisible to them. Therefore, depth perception capable of accurately localizing transparent objects, or an alternative means of localizing them, is needed. With our work on localizing metallic handles (e.g. door handles), we show that these shortcomings of depth sensors can be overcome in modern vision applications. Beyond reliably handling various objects, such a robot must fulfill further requirements. To work autonomously, it must be able to operate manual doors, cabinets, drawers, and manual controls (e.g. buttons), as retrofitting them with automation is either expensive or impossible. The main challenges here lie in the precise motions required to operate handles or latches and in the necessary force control. Here, too, localizing the metallic handles and predicting the objects' movement during interaction is challenging. Building on our handle localization, we demonstrate robotic opening of both swing and sliding doors. Using ahead-of-time motion planning coupled with online force control, the robot can gently operate these mechanisms in laboratories, showcasing how combining vision techniques with motion planning and force control enables robots to handle real-world, everyday tasks.