Journal of Environmental Treatment Techniques
2020, Volume x, Issue x, Pages: 1390-1399
introduced for prediction based on annual and daily data. These simulations could forecast air quality with reasonable reliability.
by the number of training patterns, the number of input/output neurons, the amount of noise in the measurements, the network architecture, and the type of activation function applied in the hidden layer [10]. A single hidden layer is capable of approximating most nonlinear functions relating the inputs and outputs [11]. The number of neurons in the hidden layer can be obtained by training many networks and evaluating the corresponding errors on the test-set data: a hidden layer with too few neurons yields large testing errors because of under-fitting and statistical bias, whereas too many hidden-layer neurons give a small training error but a high testing error because of over-fitting and high variance [12].
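The selection procedure described above (training several networks with different hidden-layer sizes and comparing their test-set errors) can be illustrated with a minimal sketch; the synthetic data, function names, and settings here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for the meteorological inputs
# and the pollutant-concentration target.
X = rng.uniform(-1, 1, size=(200, 3))
y = np.tanh(X @ np.array([1.5, -2.0, 0.7]))[:, None] + 0.05 * rng.normal(size=(200, 1))
X_train, y_train, X_test, y_test = X[:150], y[:150], X[150:], y[150:]

def train_mlp(X, y, n_hidden, eta=0.05, epochs=2000, seed=1):
    """One-hidden-layer tanh network trained by batch gradient descent."""
    r = np.random.default_rng(seed)
    W1 = r.normal(scale=0.5, size=(X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = r.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden-layer activations
        out = H @ W2 + b2                   # linear (identity) output layer
        err = out - y
        # Back-propagated gradients of the mean squared error
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H**2)      # tanh'(x) = 1 - tanh(x)^2
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W1 -= eta * gW1; b1 -= eta * gb1
        W2 -= eta * gW2; b2 -= eta * gb2
    return W1, b1, W2, b2

def mse(params, X, y):
    W1, b1, W2, b2 = params
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))

# Train several networks and compare their errors: too few hidden neurons
# under-fit, too many over-fit (low training error, higher test error).
for n in (1, 2, 4, 8, 16):
    p = train_mlp(X_train, y_train, n)
    print(n, round(mse(p, X_train, y_train), 4), round(mse(p, X_test, y_test), 4))
```

The hidden size with the lowest test-set error would then be retained, matching the model-selection criterion described in the text.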
2 Al-Ahmadi city
Al-Ahmadi city is located in the southeast of Kuwait (Fig. 1), with an area of 5,120 km² and a population of about 394,000. It is heavily polluted owing to the rapid increase in vehicles and industry, such as oil refineries, which strongly affect air quality there. The Kuwait Institute for Scientific Research has been involved in various studies and projects to monitor and evaluate air quality in hospitals, factories, and oil fields in the area and its relation to urban growth; such growth increases the demand on different resources (electricity and water). Dust, gases, and other industrial pollutants have resulted in a sharp increase in cancer among Al-Ahmadi residents (Table 1, Figure 2).

3.1.2 Activation Function
An activation function is needed in the network to build a non-linear relationship between the input and output factors, so that the factors can be connected and related [12]. The studies in [7] and [9] reviewed articles related to poor air quality and confirmed that the hyperbolic tangent sigmoid activation function is faster and more efficient than the logistic sigmoid activation function for representing the nonlinearity between the hidden-layer neurons [13]. The hyperbolic tangent function was therefore used for the hidden neuron layers in this research. In addition, the 'identity function' was used for the neurons in the input and output layers [9].
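The two activation functions compared here are standard, so they can be written down directly; a small sketch of both, plus the identity function used for the input and output layers (function names are mine):

```python
import numpy as np

def logistic(x):
    """Logistic sigmoid: output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh_sigmoid(x):
    """Hyperbolic tangent sigmoid: output in (-1, 1).
    It is a rescaled logistic: tanh(x) = 2*logistic(2x) - 1."""
    return np.tanh(x)

def identity(x):
    """Identity activation, used for input- and output-layer neurons."""
    return x

x = np.linspace(-3, 3, 7)
print(logistic(x))       # values in (0, 1)
print(tanh_sigmoid(x))   # values in (-1, 1), zero-centred
```

The tanh curve is zero-centred and has a maximum slope of 1 at the origin versus 0.25 for the logistic function, which is one common explanation for its faster training.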
3.1.3 Learning Parameters
Training is carried out on the multilayer neural network and requires a suitable data set comprising inputs and the associated outputs. The back-propagation algorithm is highly useful for training the multi-layer NN [14]. In back-propagation training, the learning rate η and the momentum μ are used to speed up or slow down the convergence of the error [15]. The back-propagation learning scheme provides an estimate of the trajectory in weight space computed by a gradient-descent technique [16]. A very low learning rate η results in small changes of the synaptic weights from one iteration to the next and reduces the speed of training. Conversely, a high η increases the speed of training because of the large changes in the neuron weights, and can therefore make the network unstable. In the back-propagation training algorithm, the momentum term μ is adopted to prevent network instability. The values of η and μ range from 0 to 1 [17] and [13].
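The roles of η and μ can be illustrated with the textbook gradient-descent-with-momentum update, Δw(t) = −η ∂E/∂w + μ Δw(t−1) (a standard formulation, not quoted from the paper):

```python
def momentum_step(w, grad, velocity, eta=0.1, mu=0.9):
    """One weight update: dw(t) = -eta * dE/dw + mu * dw(t-1)."""
    velocity = -eta * grad + mu * velocity
    return w + velocity, velocity

# Minimise E(w) = w^2 (gradient dE/dw = 2w), starting from w = 5.0.
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, 2 * w, v, eta=0.1, mu=0.9)
print(abs(w) < 0.01)  # moderate eta: converges toward the minimum at w = 0

# Too large a learning rate makes the same system unstable, as the text warns:
w_bad, v_bad = 5.0, 0.0
for _ in range(50):
    w_bad, v_bad = momentum_step(w_bad, 2 * w_bad, v_bad, eta=2.0, mu=0.9)
print(abs(w_bad) > 1e3)  # the weight oscillates and diverges
```

A small η gives slow but stable convergence; a large η produces large weight changes that overshoot the minimum, which the momentum term μ helps to damp.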
Figure 1: Al-Ahmadi location in Kuwait
3 ANN modeling approach
ANN designs have many benefits over traditional semi-empirical simulations: although no theory is available, it is only essential to identify the data cluster [7]. The network learns a representation of the data during processing and is capable of enhancing the depiction of the input and output parameters. Using matching inputs, this representation can be used to forecast results [8]. A dual-layer ANN can predict just about every measurable relation between output and input vectors by selecting a suitable set of weights and transfer functions [9]. It consists of interrelated 'node' or 'neuron' layers as shown in Fig.

3.1.4 Initial Weights of the Network
Before training begins, the initial weights of the neural network and its free parameters are crucial. Suitable initial weight and bias values help the learning process converge rapidly. Throughout the present research, the network weight and bias variables are set to random values uniformly distributed in the range −2.4/Fi to +2.4/Fi, where Fi is the fan-in (total number of inputs) of the neuron. The small range of the distribution reduces the likelihood of neuron saturation in the network, thus avoiding learning errors [18].
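The initialization rule of Sec. 3.1.4, uniform weights in ±2.4/Fi with Fi the fan-in, can be sketched as follows (function and variable names are mine):

```python
import numpy as np

def init_layer(fan_in, fan_out, rng):
    """Uniform initialisation in [-2.4/Fi, +2.4/Fi], where Fi is the fan-in.
    The narrow range keeps pre-activations small, so tanh neurons start
    away from their saturated regions."""
    limit = 2.4 / fan_in
    W = rng.uniform(-limit, limit, size=(fan_in, fan_out))
    b = rng.uniform(-limit, limit, size=fan_out)
    return W, b

rng = np.random.default_rng(0)
W, b = init_layer(fan_in=8, fan_out=5, rng=rng)
print(W.shape)            # (8, 5)
print(abs(W).max() <= 2.4 / 8)  # every weight within the prescribed range
```

With 8 inputs the weights fall in ±0.3; more inputs give a proportionally narrower range, which is what keeps the summed input to each neuron from saturating the activation function.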
3.1 ANN Modeling Method
The modeling procedure is based on seven serial stages: (i) selection of the optimal ANN model architecture; (ii) selection of the activation function; (iii) selection of the best training parameters, the learning rate and the momentum; (iv) initialization of the weights and network biases; (v) training and monitoring; (vi) testing and analysis of the model; and (vii) model evaluation.
3.1.5 Training and Monitoring
A 'supervised' learning algorithm is mostly used to train the network. This is accomplished by presenting known input and output data to the network in a well-arranged manner [17]. Learning involves finding a set of network weights such that the network can represent the underlying patterns in the training data. This is achieved by minimizing the model error over all pairs of inputs and associated outputs [7]. If the network settles during training into 'over-training' or 'local minima', the result is elevated model prediction errors [9].
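The over-training problem mentioned above is commonly handled by monitoring the error on a held-out data set and stopping when it no longer improves; a minimal sketch of such a monitor (not taken from the paper) is:

```python
class EarlyStopping:
    """Stop training when the validation error has not improved
    for `patience` consecutive checks."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.bad_checks = 0

    def should_stop(self, val_error):
        if val_error < self.best:
            self.best = val_error      # new best: reset the counter
            self.bad_checks = 0
        else:
            self.bad_checks += 1       # no improvement this check
        return self.bad_checks >= self.patience

# Simulated validation-error curve: improves, then over-training sets in.
errors = [0.9, 0.6, 0.45, 0.40, 0.41, 0.43, 0.47]
stopper = EarlyStopping(patience=3)
stopped_at = next(i for i, e in enumerate(errors) if stopper.should_stop(e))
print(stopped_at, stopper.best)  # stops at check 6, best error 0.4
```

In practice the weights saved at the best validation error (here the fourth check) would be the ones retained for the final model.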
3.1.1 Selection of the Optimal ANN Model Architecture
The number of input variables is equal to the number of neurons in the input layer (i.e., the variables are the meteorological and pollutant concentrations in the current work). The output layer contains a single neuron, i.e., the pollutant concentration. The number of neurons in the hidden layer is determined