
Situation Awareness Cognitive Agent for Vehicle Geolocation in Tunnels

Published in Communications in Computer and Information Science

The integration of geolocation, big data and cognitive agents has become one of the most powerful business enablers of the digital era. By definition, geolocation refers to the use of different technologies, in a variety of applications, to locate humans and objects. To deliver truly smart services, companies also need access to huge volumes of related information from which to draw meaningful conclusions. With big data, it is possible to establish connections across a wide range of associated information and use them to improve existing services or create new ones. Today, geolocation, cloud data science and the cognitive agents involved impact many application fields, including: safety and security, marketing, beacon technology, geofencing, location-sensitive services, transportation and logistics, healthcare, urban governance, intelligent buildings and smart cities, intelligent transport systems, advanced driver assistance systems, and autonomous and semi-autonomous vehicles. To address these challenges, this paper presents a general associative-cognitive architecture framework for developing goal-oriented hybrid human-machine situation-awareness systems focused on the perception and comprehension of the elements of an environment and the estimation of their future state for decision-making activities. The proposed framework emphasizes the role of associated reality as a novel cognitive agent, together with the semantic structures involved, to improve the capabilities of the corresponding systems, processes and services. As a proof of concept, a situation-awareness agent for the geolocation of vehicles in tunnels is presented, which uses cloud data association, vision-based detection of traffic signs and landmarks, and semantic roadmaps.
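As a rough illustration of the proof-of-concept idea only (not the authors' implementation), the sketch below shows how such an agent could keep estimating a vehicle's position inside a tunnel, where GNSS is unavailable, by dead-reckoning from odometry and correcting the estimate whenever the vision module detects a traffic sign or landmark listed in a semantic roadmap of the tunnel. All class names, fields and thresholds are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code) of a situation-awareness
# agent that geolocates a vehicle inside a tunnel by matching vision-detected
# landmarks against a semantic roadmap with known positions along the tunnel.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoadmapLandmark:
    label: str          # semantic class, e.g. "emergency_exit" (assumed naming)
    chainage_m: float   # known distance from the tunnel entrance, in metres

@dataclass
class Detection:
    label: str          # class reported by the vision-based detector
    confidence: float   # detector confidence in [0, 1]

class TunnelGeolocationAgent:
    """Tracks the vehicle's position along the tunnel: odometry projects the
    state forward, trusted landmark detections correct it."""

    def __init__(self, roadmap: list[RoadmapLandmark], entrance_chainage_m: float = 0.0):
        self.roadmap = sorted(roadmap, key=lambda lm: lm.chainage_m)
        self.position_m = entrance_chainage_m   # estimated distance travelled in the tunnel

    def predict(self, dt_s: float, odometry_speed_mps: float) -> None:
        """Projection step: dead-reckon the position from vehicle odometry."""
        self.position_m += odometry_speed_mps * dt_s

    def update(self, detection: Detection, min_confidence: float = 0.6) -> Optional[float]:
        """Comprehension step: snap the dead-reckoned estimate to the nearest
        roadmap landmark of the detected class, if the detection is trustworthy."""
        if detection.confidence < min_confidence:
            return None
        candidates = [lm for lm in self.roadmap if lm.label == detection.label]
        if not candidates:
            return None
        nearest = min(candidates, key=lambda lm: abs(lm.chainage_m - self.position_m))
        self.position_m = nearest.chainage_m
        return self.position_m

# Usage: dead-reckon between sightings, correct when a known sign is detected.
agent = TunnelGeolocationAgent([
    RoadmapLandmark("speed_limit_80", 250.0),
    RoadmapLandmark("emergency_exit", 600.0),
])
agent.predict(dt_s=10.0, odometry_speed_mps=22.0)         # ~220 m by odometry
agent.update(Detection("speed_limit_80", confidence=0.9)) # corrected to 250 m
print(f"Estimated chainage: {agent.position_m:.0f} m")
```

The predict/update split mirrors the perception, comprehension and projection levels of situation awareness described in the abstract: odometry projects the state forward, while a trusted landmark detection ties the estimate back to a known position in the semantic roadmap.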

Popular posts from this blog

Multiview 3D human pose estimation using improved least-squares and LSTM networks

Published in Neurocomputing. This article presents a method for estimating the 3D pose of the human body from multiple 2D views using deep learning. The system is composed of a sequence of subsystems. First, the 2D poses are obtained with a deep neural network that detects the keypoints of a simplified body skeleton in the available views. Then, the 3D coordinates of each point are reconstructed using an original proposal, based on least-squares optimization, which analyses the quality of the previous 2D detections to decide whether or not to accept them. Once the 3D poses are available, the full body position is estimated, taking the recent history into account to refine it with an LSTM network. In the experimental section, the article reports competitive results when compared with representative works from the literature.
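As a minimal sketch of the reconstruction step only (not the paper's exact formulation, and omitting the LSTM temporal refinement), the snippet below triangulates one 3D keypoint from several calibrated 2D views by homogeneous linear least squares, discarding views whose detection confidence falls below a threshold; the threshold value and function names are illustrative assumptions.

```python
# Illustrative DLT-style least-squares triangulation of one keypoint,
# with a simple per-view quality check on the 2D detections.
import numpy as np

def triangulate_keypoint(projections, points_2d, confidences, min_conf=0.5):
    """projections: list of 3x4 camera projection matrices.
    points_2d:   list of (u, v) detections, one per view.
    confidences: per-view detector confidence in [0, 1].
    Returns the 3D point, or None if fewer than two views are accepted."""
    rows = []
    for P, (u, v), c in zip(projections, points_2d, confidences):
        if c < min_conf:          # quality check: discard unreliable 2D detections
            continue
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    if len(rows) < 4:             # need at least two accepted views
        return None
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)   # homogeneous least squares: min ||A x||, ||x|| = 1
    X = vt[-1]
    return X[:3] / X[3]
```

In the full pipeline summarized above, the sequence of 3D poses reconstructed in this way would then be refined over time by the LSTM network using the recent history of the body position.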

ASTRID - Análisis y Transcripción Semántica para Imágenes de Documentos Manuscritos

Ministerio de Ciencia, Innovación y Universidades. Advances in the development of methods for automatically extracting and understanding the content of handwritten digitized documents will continue to be an important need for our society. This project addresses three challenging computational problems related to the automatic processing of handwritten text in document images: (1) document layout extraction over unstructured documents, (2) continuous handwritten text recognition under unrestricted conditions, and (3) offline verification of human signatures using advanced deep neural models. The proposed solutions to these problems will be adapted to several applications of socio-economic interest, in particular the analysis and transcription of historical documents and some demographic prediction problems based on the use of handwriting (for example, recognizing the gender or handedness of a person). In this project, we will emphasize the application of developments

SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities

Published in Sensors. This work compares an SSD network with a YOLO network on the problem of detecting outdoor advertising panels in real urban environments. Detecting advertising panels in images has important applications in both the real and the virtual world. For example, applications such as Google Street View could use it to update or personalize the advertising that appears in street imagery. In our experiments, both the SSD and YOLO networks produced interesting results under different panel sizes, lighting conditions, viewing perspectives, partial occlusions, complex backgrounds and multiple panels per scene. Owing to the difficulty of finding annotated images for the problem considered, we created our own dataset to carry out the experiments. The greatest strength of the SSD model was the near elimination of False Positive (FP) cases, a situation that is
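As a hedged illustration of the kind of per-image evaluation such a comparison relies on (not the code used in the paper), the snippet below matches predicted panel boxes to ground-truth boxes by IoU and counts true and false positives; the 0.5 threshold and the greedy matching are common choices, not necessarily the paper's.

```python
# Illustrative IoU-based counting of true/false positives for one image,
# with boxes given as (x1, y1, x2, y2) tuples.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def count_tp_fp(predictions, ground_truth, iou_threshold=0.5):
    """Greedy one-to-one matching of predicted boxes to ground-truth panels."""
    matched = set()
    tp = fp = 0
    for pred in predictions:
        best_j, best_iou = None, 0.0
        for j, gt in enumerate(ground_truth):
            if j in matched:
                continue
            overlap = iou(pred, gt)
            if overlap > best_iou:
                best_j, best_iou = j, overlap
        if best_j is not None and best_iou >= iou_threshold:
            matched.add(best_j)
            tp += 1
        else:
            fp += 1
    return tp, fp
```

Aggregating these counts over a test set yields the precision-style figures on which a False Positive comparison between the SSD and YOLO detectors can be based.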