Latest Publications & Patents on Large Language Models (LLM)
This is our latest selection of worldwide publications and patents in English on Large Language Models (LLM), drawn from many scientific online journals, classified and focused on large language model, LLM, generative pre-trained transformer, pre-training, transformer architecture, gradient descent, GPT, tokenization, generative model, self-attention mechanism, masked language model and MLM.
Tip: in addition to this selection on Large Language Models (LLM), you can search and filter our:
* free publications search tool, by author, topic, keywords, date, or journal;
* free patents search tool, for patents in English from the European Patent Office.
A Multilayer Perceptron Feedforward Neural Network and Particle Swarm Optimization Algorithm for Optimizing Biogas Production
Published on 2025-02-19 by Arief Abdurrakhman, Lilik Sutiarso, Makhmudun Ainuri, Mirwan Ushada, Md Parvez Islam @MDPI
Abstract: Efficient biogas production significantly impacts greenhouse gas (GHG) emissions and carbon sequestration by reducing emissions and enhancing carbon storage. Nonetheless, the consistency and optimization of biogas production are hindered by fluctuations in key input variables, namely, pH, moisture content, organic loading rate (OLR), and temperature, which significantly impact the quality of agricultural waste biomass and biogas production. Any fluctuations in these variables can affect biogas p[...]
Our summary: Optimization of biogas production using machine learning techniques for agricultural waste biomass inputs. Maximum biogas production achieved with optimized pH, moisture content, OLR, and temperature values. Accurate prediction of biogas production using MLP and PSO algorithms.
Multilayer Perceptron, Feedforward Neural Network, Particle Swarm Optimization, Biogas Production
Publication
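For readers curious how a predictor-plus-swarm pipeline like this fits together, here is a minimal sketch of the general pattern: a multilayer perceptron acts as a surrogate model for biogas yield and a plain particle swarm searches the input space it learned. The feature bounds, synthetic data, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: an MLP surrogate for biogas yield plus a plain particle swarm
# searching its input space. Feature bounds, synthetic data, and hyperparameters
# are illustrative assumptions, not values from the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for measured data: [pH, moisture %, OLR, temperature C] -> yield.
lo = np.array([6.0, 60.0, 1.0, 25.0])
hi = np.array([8.0, 95.0, 6.0, 40.0])
X = rng.uniform(lo, hi, size=(200, 4))
y = -(X[:, 0] - 7.0) ** 2 - 0.01 * (X[:, 1] - 80.0) ** 2 + 0.5 * X[:, 2] + 0.05 * X[:, 3]

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                                       random_state=0)).fit(X, y)

# Particle swarm optimization over the same bounds, maximizing the surrogate's prediction.
n_particles, dim = 30, 4
pos = rng.uniform(lo, hi, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), surrogate.predict(pos)
gbest = pbest[pbest_val.argmax()]

for _ in range(100):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = surrogate.predict(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmax()]

print("best inputs found:", gbest)
print("predicted yield:", surrogate.predict(gbest.reshape(1, -1))[0])
```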
CNN–Transformer Hybrid Architecture for Underwater Sonar Image Segmentation
Published on 2025-02-19 by Juan Lei, Huigang Wang, Zelin Lei, Jiayuan Li, Shaowei Rong @MDPI
Abstract: The salient object detection (SOD) of forward-looking sonar images plays a crucial role in underwater detection and rescue tasks. However, the existing SOD algorithms find it difficult to effectively extract salient features and spatial structure information from images with scarce semantic information, uneven intensity distribution, and high noise. Convolutional neural networks (CNNs) have strong local feature extraction capabilities, but they are easily constrained by the receptive field and l[...]
Our summary: Hybrid architecture combining CNN, Transformer, and Mamba for underwater sonar image segmentation, addressing challenges of salient object detection in images with scarce semantic information, uneven intensity distribution, and high noise. Achieves outstanding competitiveness among state-of-the-art methods with MAE and E values of 0.04 and 0.973.
Transformer, CNN, underwater sonar, image segmentation
Publication
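A common way to realize a CNN–Transformer hybrid for dense prediction is to let a small convolutional stem extract local features and a Transformer encoder mix them globally before a segmentation head. The sketch below illustrates that generic pattern in PyTorch; the layer sizes and layout are assumptions for illustration, not the architecture proposed in the paper.

```python
# Minimal sketch of a CNN + Transformer hybrid for dense segmentation.
# Layer sizes and the overall layout are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class CNNTransformerSeg(nn.Module):
    def __init__(self, channels=64, num_classes=1):
        super().__init__()
        # CNN stem: local feature extraction, downsampling by 4x.
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder: global context over the flattened feature map.
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Segmentation head: upsample back to input resolution.
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, num_classes, 1),
        )

    def forward(self, x):
        f = self.stem(x)                          # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)     # (B, H*W/16, C)
        tokens = self.encoder(tokens)
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(f)                       # (B, num_classes, H, W)

model = CNNTransformerSeg()
mask_logits = model(torch.randn(2, 1, 64, 64))    # e.g. single-channel sonar patches
print(mask_logits.shape)                          # torch.Size([2, 1, 64, 64])
```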
Hierarchical Vision–Language Pre-Training with Freezing Strategy for Multi-Level Semantic Alignment
Published on 2025-02-19 by Huiming Xie, Yang Qin, Shuxue Ding @MDPI
Abstract: Vision–language pre-training (VLP) faces challenges in aligning hierarchical textual semantics (words/phrases/sentences) with multi-scale visual features (objects/relations/global context). We propose a hierarchical VLP model (HieVLP) that addresses such challenges through semantic decomposition and progressive alignment. Textually, a semantic parser deconstructs captions into word-, phrase-, and sentence-level components, which are encoded via hierarchical BERT layers. Visually, a[...]
Our summary: Hierarchical VLP model addresses challenges through semantic decomposition and progressive alignment, outperforming hierarchical baselines across various tasks, boosting accuracy in reasoning tasks.
Hierarchical Vision-Language, Pre-Training, Multi-Level, Semantic Alignment
Publication
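The multi-level alignment idea can be pictured as a contrastive loss applied separately at each textual granularity against the corresponding visual scale. The sketch below is a generic InfoNCE-style simplification with made-up dimensions and random features, not the HieVLP model itself.

```python
# Minimal sketch: contrastive alignment between visual features and text features
# at several granularities (word / phrase / sentence). Dimensions and losses are
# generic InfoNCE-style placeholders, not the HieVLP implementation.
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings a[i] <-> b[i]."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

batch, dim = 8, 256
visual = {level: torch.randn(batch, dim) for level in ("object", "relation", "global")}
textual = {level: torch.randn(batch, dim) for level in ("word", "phrase", "sentence")}

# Progressive alignment: pair each textual level with its visual counterpart.
pairs = [("word", "object"), ("phrase", "relation"), ("sentence", "global")]
loss = sum(info_nce(textual[t], visual[v]) for t, v in pairs)
print(float(loss))
```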
Remote Sensing Large Vision Language Models Dataset and Evaluation Benchmark
Published on 2025-02-19 by Haodong Li, Xiaofeng Zhang, Haicheng Qu @MDPI
Abstract: With the rapid development of large visual language models (LVLMs) and multimodal large language models (MLLMs), these models have demonstrated strong performance in various multimodal tasks. However, alleviating the generation of hallucinations remains a key challenge in LVLMs research. For remote sensing LVLMs, there are problems such as low quality, small number and unreliable datasets and evaluation methods. Therefore, when applied to remote sensing tasks, they are prone to hallucinations, r[...]
Our summary: A more reliable and effective instruction set production process and evaluation benchmark for remote sensing LVLMs, addressing the low quality, small size, and unreliability of existing datasets and evaluation methods that leave these models prone to hallucinations in remote sensing tasks.
Remote Sensing, Large Vision Language Models, Dataset, Evaluation Benchmark
Publication
Transforming Applicant Tracking Systems with Intelligent Resume Embeddings for Precise Candidate Matching
Published on 2025-02-18 by Ravi Varma Kumar Bevara, Nishith Reddy Mannuru, Sai Pranathi Karedla, Brady Lund, Ting Xiao, Harshitha Pasem, Sri Chandra Dronavalli, Siddhanth Rupeshkumar @MDPI
Abstract: Conventional Applicant Tracking Systems (ATSs) encounter considerable constraints in accurately aligning resumes with job descriptions (JD), especially in handling unstructured data and intricate qualifications. We provide Resume2Vec, an innovative method that utilizes transformer-based deep learning models, including encoders (BERT, RoBERTa, and DistilBERT) and decoders (GPT, Gemini, and Llama), to create embeddings for resumes and job descriptions, employing cosine similarity for evaluation. O[...]
Our summary: Transforming Applicant Tracking Systems with the Resume2Vec method, which uses transformer-based deep learning models for precise candidate matching, outperforming conventional ATSs across various professional fields and showing promise for enhancing recruiting technology.
Intelligent Resume Embeddings, Transformer-based deep learning models, Cosine similarity, Candidate Matching
Publication
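The matching step described here boils down to embedding both documents and ranking by cosine similarity. A minimal sketch of that step follows, assuming the sentence-transformers library and a generic public encoder; the model name and sample texts are illustrative and not taken from the paper.

```python
# Minimal sketch: embed a resume and a job description, rank by cosine similarity.
# The model choice and sample texts are illustrative; the paper evaluates several
# encoder and decoder models (BERT, RoBERTa, DistilBERT, GPT, Gemini, Llama).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # generic public encoder, an assumption

job_description = "Data engineer with Python, SQL, and cloud data pipeline experience."
resumes = [
    "Built ETL pipelines in Python and Airflow; strong SQL and AWS background.",
    "Front-end developer specializing in React and TypeScript UI work.",
]

jd_vec = model.encode(job_description, convert_to_tensor=True)
resume_vecs = model.encode(resumes, convert_to_tensor=True)

scores = util.cos_sim(jd_vec, resume_vecs)[0]     # cosine similarity per resume
for text, score in sorted(zip(resumes, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {text}")
```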
Improved Hadamard Decomposition and Its Application in Data Compression in New-Type Power Systems
Published on 2025-02-18 by Zhi Ding, Tianyao Ji, Mengshi Li @MDPI
Abstract: The proliferation of renewable energy sources, flexible loads, and advanced measurement devices in new-type power systems has led to an unprecedented surge in power signal data, posing significant challenges for data management and analysis. This paper presents an improved Hadamard decomposition framework for efficient power signal compression, specifically targeting voltage and current signals which constitute foundational measurements in power systems. First, we establish theoretical guarantee[...]
Our summary: Improved Hadamard Decomposition framework for efficient power signal compression in new-type power systems, theoretical guarantees for decomposition uniqueness, enhanced gradient descent algorithm for superior convergence performance.
Hadamard Decomposition, Data Compression, Power Systems, Orthogonality
Publication
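In its usual formulation, a Hadamard decomposition approximates a matrix as the element-wise product of two low-rank factors, X ≈ (U1 V1ᵀ) ⊙ (U2 V2ᵀ), fitted by gradient descent. The sketch below shows that generic formulation with PyTorch autograd; the rank, learning rate, and test matrix are illustrative, and the paper's improved algorithm and theoretical guarantees are not reproduced here.

```python
# Minimal sketch: fit X ≈ (U1 @ V1.T) * (U2 @ V2.T) by plain gradient descent.
# Rank, learning rate, and the random test matrix are illustrative; this is the
# generic formulation, not the paper's improved algorithm.
import torch

torch.manual_seed(0)
m, n, rank = 64, 48, 4

# A test matrix with Hadamard low-rank structure, plus a little noise.
X = (torch.randn(m, rank) @ torch.randn(rank, n)) * (torch.randn(m, rank) @ torch.randn(rank, n))
X = X + 0.01 * torch.randn(m, n)

factors = [torch.randn(m, rank, requires_grad=True), torch.randn(n, rank, requires_grad=True),
           torch.randn(m, rank, requires_grad=True), torch.randn(n, rank, requires_grad=True)]
opt = torch.optim.Adam(factors, lr=0.05)

for _ in range(2000):
    U1, V1, U2, V2 = factors
    approx = (U1 @ V1.t()) * (U2 @ V2.t())   # element-wise (Hadamard) product of two low-rank terms
    loss = ((X - approx) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    U1, V1, U2, V2 = factors
    approx = (U1 @ V1.t()) * (U2 @ V2.t())
    print("relative error:", (torch.norm(X - approx) / torch.norm(X)).item())
```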
System and method for implementing an advisory assistant to a generative artificial intelligence tool
Patent published on the 2025-02-13 in WO under Ref WO2025034984 by KPMG LLP [US] (Mcclung Joshua [us], Atkins Ryan [us], Bilal Mohammad [us], Hardy Bryan [us], Keller David [us])
Abstract: The invention relates to computer-implemented systems and methods that implement an innovative generative AI service based on proprietary expertise and industry knowledge. The generative AI service provides unique autonomous features, such as combining separate and distinct LLM responses and prompts to create unique results. Other autonomous features may include an ability to handle scaling and auto deployment of models and rerouting requests autonomously to ensure user load is balanced across t[...]
Our summary: System and method for implementing an advisory assistant to a generative artificial intelligence tool. Unique autonomous features include combining separate LLM responses and prompts, handling scaling and auto deployment of models, and deploying new Production instances based on system criteria or user request.
artificial intelligence, advisory assistant, generative AI, autonomous features
Patent
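One way to picture the combination and rerouting behavior the patent describes is a small router that load-balances prompts across model backends and merges their answers. The sketch below uses entirely hypothetical backend functions and is only a conceptual simplification, not the patented implementation.

```python
# Minimal sketch: round-robin routing across model backends and naive merging of
# their responses. The backend callables are hypothetical stand-ins; the patent's
# actual scaling, deployment, and combination logic is not reproduced here.
import itertools
from typing import Callable, List

def backend_a(prompt: str) -> str:      # hypothetical model endpoint
    return f"[model-A] answer to: {prompt}"

def backend_b(prompt: str) -> str:      # hypothetical model endpoint
    return f"[model-B] answer to: {prompt}"

class AdvisoryRouter:
    def __init__(self, backends: List[Callable[[str], str]]):
        self._cycle = itertools.cycle(backends)   # simple round-robin load balancing
        self._backends = backends

    def route(self, prompt: str) -> str:
        """Send a single prompt to the next backend in the rotation."""
        return next(self._cycle)(prompt)

    def combine(self, prompt: str) -> str:
        """Query every backend and merge the distinct responses into one result."""
        responses = [b(prompt) for b in self._backends]
        return "\n---\n".join(dict.fromkeys(responses))  # drop exact duplicates, keep order

router = AdvisoryRouter([backend_a, backend_b])
print(router.route("Summarize the new revenue-recognition guidance."))
print(router.combine("Summarize the new revenue-recognition guidance."))
```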
An implementation method for a lightweight and rapid networking mechanism based on beaconless forwarding technology
Patent published on the 2024-12-27 in LU under Ref LU506366 by YANGTZE DELTA REGION INSTITUTE QUZHOU UNIV OF ELECTRONIC SCIENCE AND TECHNOLOGY OF CHINA [CN] (Liu Qiang [cn])
Abstract: This invention discloses a method for realizing a lightweight and rapid networking mechanism based on beaconless forwarding technology. It designs a lightweight and rapid networking mechanism based on beaconless forwarding technology, initiating and completing the initial network formation through a ground station. Then, mobile nodes not yet part of the network request to join the network in a hierarchical manner, achieving network formation. This invention's method, tailored for high dynamic in[...]
Our summary: An implementation method for a lightweight and rapid networking mechanism based on beaconless forwarding technology. It designs a method for network formation through a ground station, allowing mobile nodes to join in a hierarchical manner. This invention improves networking time and reduces complexity compared to similar protocols.
beaconless forwarding technology, lightweight networking mechanism, rapid networking, network formation
Patent
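The hierarchical joining process outlined in the abstract (a ground station forms the initial network, then unjoined mobile nodes attach level by level) can be illustrated with a toy simulation. The sketch below is a hypothetical simplification of that idea, not the patented beaconless forwarding mechanism.

```python
# Toy sketch of hierarchical network formation: a ground station is level 0, and
# each unjoined mobile node attaches to the nearest already-joined node, taking
# that node's level + 1. A hypothetical simplification, not the patented mechanism.
import math

ground_station = {"id": "GS", "pos": (0.0, 0.0), "level": 0, "parent": None}
joined = [ground_station]
pending = [
    {"id": "N1", "pos": (1.0, 0.5)}, {"id": "N2", "pos": (2.2, 0.4)},
    {"id": "N3", "pos": (1.1, 1.8)}, {"id": "N4", "pos": (3.0, 1.5)},
]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Nodes join in order of distance to the existing network (closest first).
while pending:
    node, parent = min(
        ((n, min(joined, key=lambda j: dist(j["pos"], n["pos"]))) for n in pending),
        key=lambda pair: dist(pair[1]["pos"], pair[0]["pos"]),
    )
    node.update(level=parent["level"] + 1, parent=parent["id"])
    joined.append(node)
    pending.remove(node)

for n in joined:
    print(f'{n["id"]}: level {n["level"]}, parent {n["parent"]}')
```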