Swarm intelligence application to UAV aided IoT data acquisition deployment optimization
It is feasible and safe to use an unmanned aerial vehicle (UAV) as a data collection platform for the Internet of Things (IoT). To save the platform's energy and let the UAV carry out collection effectively, the UAV's deployment must be optimized. The objective is to minimize the sum of the UAV's energy loss and the data-transmission loss of the IoT devices. The key to solving the problem is to determine the locations and the number of docking points at which the UAV hovers to collect data. This paper proposes a
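The abstract is truncated, but its core subproblem, placing docking points among IoT devices, resembles a clustering task. As an illustrative sketch only (a plain k-means pass, not the paper's swarm-intelligence method; all names are hypothetical), candidate docking points can be taken as cluster centroids of the device positions:

```python
import numpy as np

def kmeans_docking_points(device_xy, k, iters=50, seed=0):
    """Illustrative k-means: cluster IoT device positions and use
    the cluster centroids as candidate UAV docking points."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(device_xy, dtype=float)
    centroids = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # assign each device to its nearest docking point
        d = np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each docking point to the mean of its assigned devices
        for j in range(k):
            if (labels == j).any():
                centroids[j] = pts[labels == j].mean(axis=0)
    return centroids, labels

# two well-separated groups of devices
devices = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
points, assignment = kmeans_docking_points(devices, k=2)
```

A real deployment optimizer would also weigh the UAV's travel energy and the devices' transmission loss when choosing `k`, which this sketch ignores.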
Neural Knapsack: A Neural Network Based Solver for the Knapsack Problem
This paper introduces a heuristic solver based on neural networks and deep learning for the knapsack problem. The solver is inspired by mechanisms and strategies used by both algorithmic solvers and humans. The neural model of the solver is based on introducing several inductive biases into the architecture. We introduce a stored memory of vectors that holds item representations and their relationship to the knapsack capacity, and a module that allows the solver to access all the outputs it has previously generated. The solver is trained and tested on synthetic datasets that represent a variety of
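For context, the exact baseline such a neural heuristic would be compared against is the classic dynamic-programming solution to the 0/1 knapsack problem, which runs in O(n · capacity) time:

```python
def knapsack(values, weights, capacity):
    """Exact 0/1 knapsack via dynamic programming over capacities.
    dp[c] holds the best value achievable with total weight <= c."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

best = knapsack([60, 100, 120], [10, 20, 30], 50)  # → 220
```

The DP is exact but pseudo-polynomial in the capacity, which is one motivation for learned heuristics on large instances.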
Stochastic travelling advisor problem simulation with a case study: A novel binary gaining-sharing knowledge-based optimization algorithm
This article proposes a new problem called the Stochastic Travelling Advisor Problem (STAP) in network optimization. It is defined for an advisory group that wants to choose a subset of candidate workplaces comprising the most profitable route within the daily working-hours limit. A nonlinear binary mathematical model is formulated, and a real application case study in the occupational health and safety field is presented. The problem has a stochastic nature in travelling and advising times, since deterministic models are not appropriate for such real-life problems. The
Nonlinear single-input single-output model-based estimation of cardiac output for normal and depressed cases
Mental depression is associated with an increased risk of cardiovascular mortality; this motivates providing simple, generic nonlinear mathematical models for normal and depressed cases that use only heart rate (HR) or stroke volume (SV) as a single input to produce cardiac output (CO) as a single output, instead of using both HR and SV as two inputs. The proposed models could in the future be an effective tool to investigate the effect of neuroleptic medication, especially in depression, and they reduce processing time. Seventy-four depressed cases, 74 normal peers, and autoregressive considered as a
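For background, the standard two-input relation that the proposed single-input models replace is the basic physiological identity CO = HR × SV:

```python
def cardiac_output(hr_bpm, sv_ml):
    """Standard relation CO = HR x SV: cardiac output in L/min from
    heart rate (beats/min) and stroke volume (mL/beat)."""
    return hr_bpm * sv_ml / 1000.0  # mL/min -> L/min

co = cardiac_output(70, 70)  # a typical resting value, 4.9 L/min
```

The paper's contribution is to approximate CO from only one of the two factors via a nonlinear model, which this identity alone cannot do.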
Optimum functional splits for optimizing energy consumption in V-RAN
A virtualized radio access network (V-RAN) is considered one of the key research points in the development of 5G and the introduction of machine learning algorithms into the telecom industry. Recent technological advancements in Network Function Virtualization (NFV) and Software Defined Radio (SDR) are the main building blocks towards V-RAN, and they have enabled the virtualization of dual-site processing instead of all BBU processing as in the traditional RAN. As a result, several research works have discussed the trade-off between power and bandwidth consumption in V-RAN. Processing at remote locations
Collision Probability Computation for Road Intersections Based on Vehicle to Infrastructure Communication
In recent years, many probability models have been proposed to calculate a collision probability for each vehicle, and those models are used in collision-avoidance and intersection-management algorithms. In this paper, we introduce a method to calculate the collision probability of vehicles at an urban intersection. The proposed model uses each vehicle's current position, speed, acceleration, and turning direction; each vehicle shares this information with the roadside unit (RSU) via Vehicle-to-Infrastructure (V2I) communication. The RSU can predict each vehicle's path in the intersection by using the received
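As a minimal sketch of the path-prediction step described above (assuming constant-acceleration motion; the function names and the distance-threshold idea are illustrative, not the paper's exact model), the RSU could roll each vehicle's state forward and flag pairs whose predicted gap becomes small:

```python
import math

def predict_position(p0, v, a, t):
    """Constant-acceleration kinematics: position after t seconds.
    p0, v, a are (x, y) tuples: position, velocity, acceleration."""
    return tuple(p + vi * t + 0.5 * ai * t * t
                 for p, vi, ai in zip(p0, v, a))

def min_gap(pa0, va, aa, pb0, vb, ab, horizon=5.0, dt=0.1):
    """Minimum predicted distance between two vehicles over a time
    horizon; a small gap flags a potential collision."""
    gap = math.inf
    for k in range(int(horizon / dt) + 1):
        t = k * dt
        xa, ya = predict_position(pa0, va, aa, t)
        xb, yb = predict_position(pb0, vb, ab, t)
        gap = min(gap, math.hypot(xa - xb, ya - yb))
    return gap

# two vehicles approaching the same crossing point from west and south
g = min_gap((0, 0), (10, 0), (0, 0), (50, -50), (0, 10), (0, 0))
```

A probabilistic model would additionally account for uncertainty in the shared state (e.g. sensor noise) rather than a single deterministic trajectory.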
A Review of Machine learning Use-Cases in Telecommunication Industry in the 5G Era
With the development of 5G and Internet of Things (IoT) applications, which produce an enormous amount of data, the need for efficient data-driven algorithms has become crucial. Security concerns are also expected to rise with the use of state-of-the-art information technology (IT), as data may be vulnerable to remote attacks. This paper therefore provides a high-level overview of machine-learning use cases for data-driven operation, security maintenance, and the easing of telecommunication operating processes. It emphasizes the importance of analyzing the role of machine learning in the
Real-Time Lane Instance Segmentation Using SegNet and Image Processing
The rising interest in assistive and autonomous driving systems throughout the past decade has led to an active research community in perception and scene-interpretation problems like lane detection. Traditional lane detection methods rely on specialized, hand-tailored features, which are slow to compute and scale poorly. Recent methods that rely on deep learning and are trained on pixel-wise lane segmentation have achieved better results and are able to generalize to a broad range of road and weather conditions. However, practical algorithms must be computationally inexpensive due to limited resources
Real-Time Collision Warning System Based on Computer Vision Using Mono Camera
This paper aims to help self-driving cars and autonomous vehicle systems merge safely with the road environment and to ensure the reliability of these systems in real life. Crash avoidance is a complex system that depends on many parameters. The forward-collision warning system is broken down into four main objectives: detecting cars, estimating depth, assigning cars to lanes (lane assignment), and tracking. The presented work targets the software approach by using YOLO (You Only Look Once), a deep-learning object detector network, to detect cars with an accuracy of up to 93%
Comparative Analysis of Various Machine Learning Techniques for Epileptic Seizures Detection and Prediction Using EEG Data
Epileptic seizures occur as a result of functional brain dysfunction and can affect the health of the patient. Predicting epileptic seizures before onset is beneficial for preventing seizures through medication. Electroencephalogram (EEG) signals are used to predict epileptic seizures using machine learning techniques and feature extraction. Nevertheless, the pre-processing of EEG signals for noise removal and the extraction of features are two significant problems that adversely affect both anticipation time and true-positive prediction performance. Considering this, the