http://repositorio.unb.br/handle/10482/52392
File | Description | Size | Format
---|---|---|---
página em branco.pdf | | 8.42 kB | Adobe PDF
Title: | Heuristic once learning for image & text duality information processing
Authors: | Weigang, Li; Martins, Luiz; Ferreira, Nikson; Miranda, Christian; Althoff, Lucas; Pessoa, Walner; Farias, Mylenè; Jacobi, Ricardo; Rincon, Mauricio
ORCID: | https://orcid.org/0000-0003-1826-1850; https://orcid.org/0000-0003-0089-3905
Affiliation: | University of Brasilia, Department of Computer Science (all authors)
Subjects: | Heuristics; Convolutional Neural Networks (CNNs); Computer vision; Deep learning; Image
Publication date: | Dec-2022
Publisher: | IEEE
Citation: | WEIGANG, Li et al. Heuristic once learning for image & text duality information processing. In: 2022 IEEE Smartworld, Ubiquitous Intelligence & Computing, Scalable Computing & Communications, Digital Twin, Privacy Computing, Metaverse, Autonomous & Trusted Vehicle (SmartWorld/UIC/ScalCom/DigitalTwin/PriComp/Meta), Haikou, p. 1353-1359, 2022. DOI: 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00195. Available at: https://ieeexplore.ieee.org/document/10189581. Accessed: 6 Aug. 2025.
Abstract: | Few-shot learning is an important mechanism for minimizing the need to label large amounts of data while taking advantage of transfer learning. To identify image/text input with the duality property, this research proposes a “Heuristic once learning (HOL)” mechanism to investigate multi-modal input processing in a manner similar to human behavior. First, we create an image/text dataset of big Latin letters composed of small letters, and another dataset composed of Arabic, Chinese and Roman numerals. Second, we use Convolutional Neural Networks (CNNs) to pre-train on the letter dataset and obtain structural features. Third, using the acquired knowledge, a Self-organizing Map (SOM) and Contrastive Language-Image Pretraining (CLIP) are tested separately with zero-shot learning; Siamese Networks and a Vision Transformer (ViT) are also tested with one-shot learning via knowledge transfer to identify the features of unknown characters. The results show the potential of, and the challenges in, realizing HOL, and represent a useful step toward the development of general agents.
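As a rough illustration of the one-shot pathway described in the abstract (a Siamese network comparing an unknown character against single labeled support examples), the following minimal PyTorch sketch shows the general idea. The architecture, the 28x28 input size, and the names `SiameseCNN` and `one_shot_predict` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Siamese-style one-shot character matching.
# Assumptions: layer sizes, 28x28 grayscale inputs, and helper names are
# illustrative only; they are not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseCNN(nn.Module):
    """Shared-weight CNN encoder; embedding similarity drives one-shot matching."""
    def __init__(self, embedding_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, embedding_dim),  # assumes 28x28 inputs
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between the two branch embeddings.
        return F.cosine_similarity(self.encoder(a), self.encoder(b))

def one_shot_predict(model: nn.Module, query: torch.Tensor,
                     support: torch.Tensor, labels: list[str]) -> str:
    """Return the label of the single support example most similar to the query."""
    with torch.no_grad():
        sims = model(query.expand(support.size(0), -1, -1, -1), support)
    return labels[int(sims.argmax())]

if __name__ == "__main__":
    model = SiameseCNN()
    query = torch.randn(1, 1, 28, 28)    # an unseen character image
    support = torch.randn(3, 1, 28, 28)  # one labeled example per class
    print(one_shot_predict(model, query, support, ["arabic", "chinese", "roman"]))
```

In the setting the abstract describes, the support set would hold one labeled example per character class (e.g., per numeral system), and the encoder would start from weights pre-trained on the composed-letter dataset rather than from random initialization.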
Unit: | Instituto de Ciências Exatas (IE); Departamento de Ciência da Computação (IE CIC)
Graduate program: | Programa de Pós-Graduação em Informática
License: | Copyright © 2022, IEEE. Source: https://s100.copyright.com/AppDispatchServlet?publisherName=ieee&publication=proceedings&title=Heuristic+Once+Learning+for+Image+%26amp%3B+Text+Duality+Information+Processing&isbn=979-8-3503-4655-8&publicationDate=December+2022&author=Li+Weigang&ContentID=10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00195&orderBeanReset=true&startPage=1353&endPage=1359&proceedingName=2022+IEEE+Smartworld%2C+Ubiquitous+Intelligence+%26+Computing%2C+Scalable+Computing+%26+Communications%2C+Digital+Twin%2C+Privacy+Computing%2C+Metaverse%2C+Autonomous+%26+Trusted+Vehicles+%28SmartWorld%2FUIC%2FScalCom%2FDigitalTwin%2FPriComp%2FMeta%29. Accessed: 6 Aug. 2025.
DOI: | 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00195 |
Publisher version: | https://ieeexplore.ieee.org/document/10189581/figures#figures
Appears in collections: | Articles published in journals and related venues