Here is our latest selection of worldwide publications and patents in English on neural networks, drawn from many online scientific journals, sorted and focused on neural networks, artificial neurons, epochs, neural architecture, machine learning, deep learning, and support vector machines.
Patents: no patent news on this particular topic. Please try a thorough manual search in the patent database linked just above.
Fast Prediction of Combustion Heat Release Rates for Dual-Fuel Engines Based on Neural Networks and Data Augmentation
Published on 2025-02-19 by Mingxin Wei, Xiuyun Shuai, Zexin Ma, Hongyu Liu, Qingxin Wang, Feiyang Zhao, Wenbin Yu @MDPI
Abstract: As emission regulations become increasingly stringent, diesel/natural gas dual-fuel engines are regarded as a promising solution and have attracted extensive research attention. However, their complex combustion processes pose significant challenges to traditional combustion modeling approaches. Data-driven modeling methods offer an effective way to capture the complexity of combustion processes, but their performance is critically constrained by the quantity and quality of the test data. To add[...]
Our summary: A fast prediction model for dual-fuel engine combustion heat release rates, built on neural networks and data augmentation. A hybrid regression data-augmentation architecture combined with a Bayesian neural network delivers high-quality predictions, while an adaptive weight-allocation method balances the accuracy distribution and strengthens generalization (see the illustrative sketch after the keywords below).
Neural Networks, Data Augmentation, Combustion Prediction, Dual-Fuel Engines
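To make the idea in the summary above concrete, here is a minimal Python sketch of a Bayesian-style neural network for heat release rate regression, approximated with Monte Carlo dropout. It assumes PyTorch and is not the authors' code: the paper's hybrid data-augmentation architecture and adaptive weight-allocation method are not reproduced, and the network name and input features are hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative only: approximates a Bayesian neural network with
# Monte Carlo dropout. The paper's hybrid data-augmentation
# architecture and adaptive weight allocation are not reproduced;
# all names and feature counts below are hypothetical.

class HeatReleaseNet(nn.Module):
    def __init__(self, n_inputs: int, hidden: int = 64, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),  # predicted heat release rate
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples: int = 100):
    """Keep dropout active at inference and sample repeatedly;
    the spread of the samples approximates predictive uncertainty."""
    model.train()  # leaves dropout on during the forward passes
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

# Example: 8 hypothetical operating-condition features (load,
# natural-gas fraction, injection timing, ...). The model is
# untrained here; in practice it would first be fit on the
# augmented engine test data.
model = HeatReleaseNet(n_inputs=8)
x = torch.randn(4, 8)  # a small batch of test conditions
mean, std = predict_with_uncertainty(model, x)
```

Keeping dropout active at inference is a cheap stand-in for a full Bayesian neural network: the spread of repeated stochastic forward passes approximates predictive uncertainty, which helps indicate where the augmented training data is still too sparse to trust the prediction.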
Automatic Configuration Search for Parameter-Efficient Fine-Tuning
Published on 2024-05-25 by Han Zhou, Xingchen Wan, Ivan Vulić, Anna Korhonen @MIT
Abstract: Large pretrained language models are widely used in downstream NLP tasks via task-specific fine-tuning, but such procedures can be costly. Recently, Parameter-Efficient Fine-Tuning (PEFT) methods have achieved strong task performance while updating much fewer parameters than full model fine-tuning (FFT). However, it is non-trivial to make informed design choices on the PEFT configurations, such as their architecture, the number of tunable parameters, and even the layers in which the PEFT modules[...]
Our summary: AutoPEFT automates the search for parameter-efficient fine-tuning configurations using Bayesian optimization. It outperforms existing PEFT methods on GLUE and SuperGLUE tasks while updating far fewer parameters, and it returns a Pareto-optimal set of configurations that balances task performance against parameter cost (see the illustrative sketch after the keywords below).
automatic configuration search, parameter-efficient fine-tuning, neural architecture search, pretrained language models, task-specific fine-tuning
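As a rough illustration of the configuration search described in the summary above, the sketch below runs Bayesian optimization over a toy PEFT search space with scikit-optimize. AutoPEFT itself performs multi-objective Bayesian optimization to recover a Pareto front of cost/performance trade-offs; here the search space, the finetune_and_eval stub, and its synthetic score are hypothetical placeholders, not the paper's method.

```python
from skopt import gp_minimize
from skopt.space import Integer, Categorical
from skopt.utils import use_named_args

# Illustrative only: single-objective Bayesian optimization over a
# toy PEFT configuration space. AutoPEFT uses multi-objective BO to
# recover a Pareto front; this sketch optimizes accuracy alone.
space = [
    Integer(1, 64, name="bottleneck_dim"),   # adapter bottleneck size
    Integer(1, 12, name="num_peft_layers"),  # how many layers get modules
    Categorical(["serial", "parallel"], name="adapter_placement"),
]

def finetune_and_eval(bottleneck_dim, num_peft_layers, adapter_placement):
    """Hypothetical stand-in: a real run would fine-tune the model with
    this PEFT configuration and return dev-set accuracy. A synthetic
    proxy score is used here so the sketch runs end to end."""
    base = 0.70 + 0.002 * num_peft_layers + 0.001 * bottleneck_dim
    return base + (0.01 if adapter_placement == "parallel" else 0.0)

@use_named_args(space)
def objective(bottleneck_dim, num_peft_layers, adapter_placement):
    score = finetune_and_eval(bottleneck_dim, num_peft_layers,
                              adapter_placement)
    return -score  # gp_minimize minimizes, so negate accuracy

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("Best configuration found:", result.x, "score:", -result.fun)
```

A real run would replace the stub with an actual fine-tuning-plus-validation loop and add a second objective, such as tunable parameter count, to trace the performance/cost Pareto set the paper reports.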