Situation Awareness Cognitive Agent for Vehicle Geolocation in Tunnels

Published in Communications in Computer and Information Science

The integration of geolocation, big data and cognitive agents has become one of the most powerful business tools of the digital era. By definition, geolocation is the use of different technologies, in a variety of applications, to locate humans and objects. To deliver truly smart services, companies also need access to huge volumes of related information from which to draw meaningful conclusions. With big data, it is possible to establish connections across a wide range of associated information and use them to improve existing services or create new ones. Today, the influence of geolocation, cloud data science and the cognitive agents involved reaches many application fields, including: safety and security, marketing, beacon technology, geofencing, location-sensitive services, transportation and logistics, healthcare, urban governance, intelligent buildings and smart cities, intelligent transport systems, advanced driver assistance systems, and autonomous and semi-autonomous vehicles. To address these challenges, this paper presents a general associative-cognitive architecture framework for developing goal-oriented hybrid human-machine situation-awareness systems, focused on the perception and comprehension of the elements of an environment and the estimation of their future state for decision-making activities. The framework emphasizes the role of the associated reality as a novel cognitive agent, together with the semantic structures involved, to improve the capabilities of the corresponding systems, processes and services. As a proof of concept, a situation-awareness agent for the geolocation of vehicles in tunnels is presented that uses cloud data association, vision-based detection of traffic signs and landmarks, and semantic roadmaps.
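The tunnel proof of concept combines dead-reckoning prediction with corrections from visually detected landmarks whose positions are stored in a semantic roadmap. A minimal sketch of that idea follows; the landmark names, distances, and blending gain are all illustrative assumptions, not the paper's actual design:

```python
# Hypothetical sketch: correcting a vehicle's along-tunnel position when a
# known landmark (e.g., a traffic sign from the semantic roadmap) is detected.
# The roadmap, landmark ids, and gain value are illustrative assumptions.

ROADMAP = {"sign_A": 120.0, "sign_B": 340.0}  # landmark -> distance (m) from tunnel entrance


def predict(position_m, speed_mps, dt_s):
    """Dead-reckoning step: advance the position estimate using odometry."""
    return position_m + speed_mps * dt_s


def correct(position_m, landmark_id, gain=0.8):
    """Blend the predicted position with the roadmap position of a seen landmark."""
    mapped = ROADMAP[landmark_id]
    return position_m + gain * (mapped - position_m)


pos = predict(100.0, 20.0, 1.0)   # 120.0 m after one second at 20 m/s
pos = correct(pos, "sign_A")      # a detection of sign_A confirms the estimate
```

In a full system, the gain would come from the relative uncertainty of odometry and vision (as in a Kalman filter), rather than being a fixed constant.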

Popular posts from this blog

Multiview 3D human pose estimation using improved least-squares and LSTM networks

Published in Neurocomputing. This article presents a method for estimating 3D human body pose from multiple 2D views using deep learning. The system is composed of a sequence of subsystems. First, the 2D poses are obtained with a deep neural network that detects the keypoints of a simplified body skeleton in the available views. Next, the 3D coordinates of each point are reconstructed using an original proposal, based on least-squares optimization, that analyzes the quality of the previous 2D detections to decide whether or not to accept them. Once the 3D poses are available, the full body position is estimated, taking the past history into account to refine it with an LSTM network. In the experimental section, the article reports competitive results when compared with representative works from the literature.
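The least-squares reconstruction step can be sketched as standard homogeneous triangulation (DLT) that discards low-confidence 2D detections before solving. This is our illustration under assumed camera matrices and a simple confidence threshold, not the paper's exact formulation:

```python
import numpy as np

# Sketch (illustrative, not the paper's exact method): triangulate one 3D
# keypoint from its 2D detections in several calibrated views by solving a
# homogeneous least-squares system, skipping low-quality detections.


def triangulate(points_2d, projections, confidences, min_conf=0.5):
    """points_2d: list of (u, v) pixel detections; projections: 3x4 camera
    matrices; confidences: per-view detection quality in [0, 1]."""
    rows = []
    for (u, v), P, c in zip(points_2d, projections, confidences):
        if c < min_conf:               # reject unreliable 2D detections
            continue
        rows.append(u * P[2] - P[0])   # each accepted view contributes
        rows.append(v * P[2] - P[1])   # two linear constraints on X
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)        # least-squares solution: last right-singular vector
    X = vt[-1]
    return X[:3] / X[3]                # dehomogenize
```

With exact projections the system has a one-dimensional null space and the recovered point matches the ground truth; with noisy detections, the SVD returns the least-squares compromise across the accepted views.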

Off-line handwritten signature verification using compositional synthetic generation of signatures and Siamese Neural Networks

Published in Neurocomputing. In this work, the use of Siamese Neural Networks is proposed to help solve the off-line handwritten signature verification problem with random forgeries in a writer-independent setting. The system can verify new signers with as little as a single model signature to compare against. Three types of synthetic data were analyzed to increase the number of samples and the variability required to train deep neural networks: augmented samples from the GAVAB dataset, a proposed compositional synthetic signature generation from shape primitives, and the GPDSS synthetic dataset. The first two approaches are generated on demand and can be used during the training phase to produce a potentially unlimited number of synthetic signatures. The system was tested on the GPSSynthetic, MCYT, SigCo
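The writer-independent verification step can be sketched as comparing the embedding distance between a reference and a questioned signature against a threshold. The `embed` function below is a crude stand-in for a trained Siamese branch, and the threshold and images are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of writer-independent verification (assumed details,
# not the paper's trained model): a Siamese branch maps each signature image
# to an embedding, and a distance threshold accepts or rejects the pair.


def embed(signature_img):
    """Placeholder for one Siamese branch: here, crude intensity statistics."""
    img = np.asarray(signature_img, dtype=float)
    return np.array([img.mean(), img.std(), (img > 0.5).mean()])


def verify(reference_img, questioned_img, threshold=0.2):
    """Accept the questioned signature if its embedding is close to the reference."""
    d = np.linalg.norm(embed(reference_img) - embed(questioned_img))
    return d <= threshold


genuine = np.ones((8, 8)) * 0.9   # toy "signature" images
forgery = np.zeros((8, 8))
```

In the actual setting, both branches share the weights of a convolutional network trained with pairs of genuine signatures and random forgeries, so that genuine pairs land close in embedding space and forgery pairs land far apart.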

ASTRID - Análisis y Transcripción Semántica para Imágenes de Documentos Manuscritos (Semantic Analysis and Transcription for Handwritten Document Images)

Ministerio de Ciencia, Innovación y Universidades. Advances in the development of methods for automatically extracting and understanding the content of handwritten digitized documents will continue to be an important need for our society. This project addresses three challenging computational problems related to the automatic processing of handwritten text in document images: (1) document layout extraction over unstructured documents, (2) continuous handwritten text recognition under unrestricted conditions, and (3) off-line verification of human signatures using advanced deep neural models. The proposed solutions to these problems will be adapted to several applications of socio-economic interest, in particular the analysis and transcription of historical documents, and some demographic prediction problems based on the use of handwriting (for example, recognizing the gender or handedness of a person). In this project, we will emphasize the application of developments