American Journal of Education and Learning

Volume 2, Number 2 (2017) pp 159-179 doi 10.20448/804.2.2.159.179 | Research Articles

 

Application of Artificial Neural Networks Modeling for Evaluation of E-Learning/Training Convergence Time

Hassan Moustafa 1, Fadhel Ben Tourkia 1, Ramadan Mohamed Ramadan 3
1 Computer Engineering Department, Al-Baha Private College of Sciences, Al-Baha (KSA)
3 Educational Psychology Department, Educational College, Banha University, Egypt

ABSTRACT

This paper addresses the challenging issue of how to evaluate dynamically the learning performance of e-learners, via their convergence time, on the basis of a recently adopted interdisciplinary trend. Objectively, this work systematically investigates and realistically interprets some observed educational phenomena of brain function while the e-learning process proceeds. Herein, ANN modeling is adopted for realistic measurement of an e-learning performance parameter. More specifically, this parameter considers timely changes of learners' intelligence level before and during the learning/training process. At any time instant, the state of the synaptic connectivity pattern inside the e-learner's brain is supposed to be presented as a time-dependent weight vector. This synaptic state is expected to lead the learner to produce spontaneously some output (answer). Obviously, the obtained responsive learner's output is the resulting action to any arbitrary external input stimulus (question). Accordingly, the initial brain state of the synaptic connectivity pattern (vector) is considered as a pre-intelligence measuring parameter. Actually, the obtained e-learner's answer is compatibly consistent with the modified state of the internally stored, experienced level of intelligence. In other words, dynamical changes of the brain's synaptic pattern (weight vector) adaptively modify the convergence time of learning processes, so as to reach the desired answer. Additionally, the introduced research work is motivated by some obtained results for performance evaluation of some neural system models concerned with the convergence time of the learning process. Moreover, this paper considers the interpretation of interrelations among some other interesting results obtained by a set of previously published educational models. The interpretational evaluation and analysis of the introduced models result in some applicable studies in the educational field, as well as medically promising treatment of learning disabilities.
Finally, some interesting remarks illustrating the comparative analogy between learning performance in neural systems and cooperative learning in the Ant Colony System (ACS) are presented.

Keywords:  Artificial neural network modeling, E-Learning performance evaluation, Synaptic connectivity, Ant colony system.

DOI: 10.20448/804.2.2.159.179

Citation | Hassan Moustafa; Fadhel Ben Tourkia; Ramadan Mohamed Ramadan. Application of Artificial Neural Networks Modeling for Evaluation of E-Learning/Training Convergence Time. American Journal of Education and Learning, 2(2): 159-179.

Copyright: This work is licensed under a Creative Commons Attribution 3.0 License

Funding: This study received no specific financial support.

Competing Interests: The authors declare that they have no competing interests.

History: Received: 7 March 2017 / Revised: 22 September 2017 / Accepted: 4 October 2017 / Published: 19 October 2017

Publisher: Online Science Publishing

1. INTRODUCTION

The last decade of the previous century (1990-2000) was named the Decade of the Brain, after the WHITE HOUSE OSTP REPORT (U.S.A.) declared it so in 1989 (White House, n.d). Consequently, educationalists as well as computer engineering scientists have adopted a research approach associated with natural intelligence (the recent computer generation) and basic brain functions (learning and memory). Additionally, this approach has been tightly related to developed trends in information technology, so as to attain systematic analysis and performance evaluation of various learning processes. It is worthy to note that recent evolutionary interdisciplinary trends have been adopted by educationalists, incorporating neurophysiology, psychology, and the cognitive learning sciences. Herein, this paper is specifically motivated by the work of Grossberg in 1988, wherein the concept of natural intelligence was introduced (Grossberg, 1988b). Consequently, artificial neural network (ANN) modeling has been adopted to investigate systematically the mysteries of the most complex biological neural system (the human brain). Accordingly, evolutionary interdisciplinary trends have been adopted by educationalists, neurobiologists, psychologists, as well as computer engineering researchers, in order to carry out realistic investigations of some critical, challenging educational issues (Ghonaimy et al., 1994; Ghonaimy et al., 1994a; Hassan et al., 2007; Swaminathan, 2007; Hassan, 2008; Hassan, 2015). Due to the currently rapid development of the research field of learning sciences, represented by a growing international community, many experts have recently declared their interest in facing the challenging phenomenal issues of educational systems that have arisen with current advances in communication and information-technology-mediated learning. Moreover, a set of very recent approach papers have been published paying special attention to three interdisciplinary educational issues, namely noisy learning environments, overcrowded classrooms, and not-well-qualified instructors (Hassan, 2015; Hassan and Ayoub, 2016; Hassan and Ayoub, 2016).

Generally, evaluation of learning performance is a challenging, interesting, and critical educational issue (Kandel, 1979; Tsien, 2000; Tsien, 2001; Douglas, 2005; Hassan, 2005). Specifically, considering academic performance measurement of e-learning systems, some interesting papers have been published, as introduced at (Mustafa, 2011; Hassan, 2013; Hassan, 2014; Hassan et al., 2014; Hassan, 2016). Educationalists have needed to know how neuronal synapses inside the brain are interconnected, and how brain regions communicate (Swaminathan, 2007). With this information they can fully understand how the brain's structure gives rise to perception, learning, and behavior, and consequently they can investigate well the learning process phenomenon (Hassan, 2014; Hassan, 2016). This paper presents an investigational approach giving insight into the e-learning evaluation issue by adopting ANN modeling. The suggested model is motivated by the synaptic connectivity dynamics of neuronal pattern(s) inside the brain, equivalently called synaptic plasticity, while coincidence-detection learning (the Hebbian rule) is considered (Hebb, 1949). The presented interdisciplinary work aims to simulate appropriately the performance evaluation issue in e-learning systems, with special attention to face-to-face tutoring (Al-Ajroush, 2004). That purpose is fulfilled by adopting the learner's convergence (response) time as an appropriate metric parameter to evaluate his interaction with e-learning course material(s). In fact, this metric learning parameter is one of the learning parameters recommended for use in the educational field by most educationalists. In practice, it is measured by a learner's elapsed time until accomplishment of a pre-assigned achievement level (learning output) (Al-Ajroush, 2004; Hassan, 2005; Hassan et al., 2014). Thus, superior quality of an evaluated e-learning system's performance could be reached via global decrement of learners' response time.
Accordingly, that response time, needed to accomplish the pre-assigned learners' achievement, is a relevant indicator of the quality of any under-evaluation learning system. Obviously, after successful timely updating of the dynamical state vector (inside the e-learner's brain), the pre-assigned achievement is accomplished (Tsien, 2001). Consequently, the assigned learning output level is accomplished if and only if the connectivity pattern dynamics (inside the learner's brain) reach a stable convergence state, following the Hebbian learning rule. In other words, the connectivity vector pattern associated with the biological neuronal network performs coincidence detection on the input stimulating vector; i.e., inside a learner's brain, dynamical changes of the synaptic connectivity pattern (weight vector) adaptively modify the convergence time so as to deliver the desired output answer. Hence, the synaptic weight vector becomes capable of responding spontaneously (delivering the correctly coincident answer) to its environmental input vector (question) (Kandel, 1979; Fukaya, 1988; Haykin, 1999; Tsien, 2000; Tsien, 2001). Interestingly, some innovative research work has systematically considered the observed analogy between the learning process concerned with smart swarm intelligence (Ant Colony Systems) and learning performance in behavioral neural systems, as published at (Hassan, 2005; Hassan, 2008; Hassan, 2008; Ursula and Gerard, 2008; Hassan, 2011; Hassan, 2011; Hassan, 2015; Hassan, 2015).

The rest of this paper is organized as follows. In the second section, a review of performance evaluation techniques is presented. Selectivity criteria used in ANN models are briefly reviewed in the third section. In the fourth section, modeling of three learning phases is presented: face-to-face (learning under supervision), unsupervised learning (self-study), and learning by interaction with other fellows (e-learners). Experimental measurement of response time and simulation results are shown in the fifth section. This section considers the effect of the gain factor of the ANN on the time response, in addition to a comparative analogy between the gain factor's effect during learning process evaluation in neural network systems and the impact of the intercommunication (cooperative learning) parameter, while solving the Traveling Salesman Problem (TSP), in the Ant Colony System (ACS). In the sixth section, some conclusions and suggestions for future work are presented. Finally, all cited references are given in the last, seventh section. At the end of this paper four appendices are presented. Appendix A presents the research framework suggested by the Arab Open University (K.S.A. Branch), which is tightly related to this manuscript's adopted research direction. Additionally, the two Appendices B & C give program listings for the mathematical equations given in the fourth section below, considering the learning paradigms with and without supervision respectively; they are written in the MATLAB (version 6) programming language. The fourth appendix, D, presents a simplified macro-level flowchart describing the algorithmic steps for different numbers of neurons using artificial neural network modeling.

2. PERFORMANCE EVALUATION TECHNIQUES

The most widely applicable techniques for performance evaluation of complex computer systems are presented, along with analysis of statistical modeling and simulation for some experimental results measurement (given in the fifth section). More recently, self-assessment of blended learning performance has been published, Hassan (2016). Herein, all three techniques are presented, giving special attention to simulation using ANN modeling of learners' brain functions. Quantitative evaluation of the timely updating brain function is critical for the delivery of a pre-assigned learning output level in a successful e-learning system. More precisely, inside a learner's brain, dynamical changes of the synaptic connectivity pattern (weight vector) adaptively modify the convergence time so as to deliver the desired output answer (Fukaya, 1988; Hassan, 2014).

2.1. Selecting an Appropriate Learning Parameter

As noted in the introduction, the learner's convergence (response) time is adopted herein as the appropriate metric learning parameter: it is measured by the learner's elapsed time until accomplishment of a pre-assigned achievement level, and it is one of the learning parameters recommended for use in the educational field by most educationalists (Al-Ajroush, 2004; Hassan, 2005; Hassan et al., 2014).

2.2. Examinations in  E-Learning Systems

Neural network modeling has been adopted in fulfillment of better learning achievements during face-to-face tutoring. Accordingly, quantitative analysis of e-learning adaptability is performed herein via assessment of the matching between learning style preferences and the instructor's teaching style and/or e-course materials (Ursula and Gerard, 2008). More specifically, for e-learning system performance evaluation, the time response parameter is applied to measure any e-learner's achievement. Thus the e-learner has to undergo some timed measuring examination composed of multiple-choice questions. Hence, this adopted examination discipline is obviously dependent upon the learners' capability in selecting the correct answer to the questions they receive (Hassan et al., 2014). Consequently, to accomplish a pre-assigned achievement level, the stored experience inside the learner's brain should be able to develop the correct answer up to the desired (assigned) level. In the context of biological science, the selectivity function proceeds (during the examination time period) to produce spontaneously either a correct or a wrong answer to the received questions. Accordingly, the argument of the selectivity function is considered, virtually, to be the synaptic pattern vector (inside the brain) as modified to its post-training status. Hence, through the selected answer, the synaptic weight vector has become capable of responding spontaneously (delivering the correctly coincident answer) to its environmental input vector (question) (Tsien, 2000; Tsien, 2001).

3. SELECTIVITY CRITERIA (Chhabra, 2008)

Referring to the adopted performance evaluation technique for e-learning systems using the response time parameter, accomplishment of a learner's output depends on the optimal selection of the correct answer as quickly as possible. So it is well relevant to present ANN models that are capable of performing the selectivity function while solving some critical problems. Consequently, the goal of this section is to give, in brief, an overview of the mathematical formulations of the selectivity criteria adopted by various neural network models. This overview sheds light on the selectivity criterion adopted by our proposed model. The presented selectivity criteria are given in a simplified manner for four neural network models adopting an adaptive selectivity criterion, as follows:

3.1. Selectivity Criterion by Grandmother Models (Caudill, 1989)

On the basis of grandmother modeling, a simple sorting system has been constructed using a set of grandmother cells. That implies each neuron has been trained so as to respond exactly to one particular input pattern. In other words, each neuron has become able (after training) to recognize its own grandmother. Applying such models in the real world, they are characterized by two features. Firstly, a large number of grandmother cells is required to implement such a grandmother model, due to the fact that each cell is dedicated to recognizing only one pattern. Secondly, the simple sorting network needs to be trained on every possible grandmother pattern to obtain the correct output response. Consequently, all synaptic weight values in this model have to be held unchanged (fixed weights). Hence, it is inevitably required either to add new grandmother cell(s) to recognize additional new patterns, or to modify the weights of one or more existing cells to recognize those new patterns.


Fig-1. Illustrates a single grandmother cell (artificial neuronal cell) that works as a processing element. (The source is Caudill (1989).)

The above grandmother model can be well described by the following mathematical formulation. The output of any grandmother cell (neuron) is a quantizing (hard-limiting) function of its weighted input sum. Then the output y is represented by

y = 1 if U ≥ θ, and y = 0 otherwise                                                       (1)

where θ is the cell's fixed firing threshold and U is defined as

U = Σ (i = 1 … m) w_i x_i                                                                 (2)

That model utilizes a set of grandmother neuronal cells (nodes). Each of these nodes responds exactly to only one particular input data vector pattern. Therefore, for some specific m-dimensional input vector pattern, only one of the model's nodes is needed to fire selectively to it.
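The grandmother-cell formulation above can be sketched in a few lines of code. The weights, threshold, and patterns below are illustrative assumptions chosen so that the cell fires only for "its own grandmother"; they are not values from the paper.

```python
# Sketch of a single grandmother cell: a threshold (step) neuron that
# fires (outputs 1) only for the one pattern it was trained to recognize.
# Weights and threshold are illustrative assumptions.

def grandmother_cell(x, w, theta):
    """Return 1 if the weighted sum U = sum(w_i * x_i) reaches the
    fixed threshold theta, else 0 (hard-limiting quantizer)."""
    u = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if u >= theta else 0

# Cell "trained" to recognize the pattern (1, 0, 1): weights mirror the
# pattern, and the threshold is set so only a full match can fire it.
w = [1.0, -1.0, 1.0]   # negative weight penalizes activity at the 0-position
theta = 2.0

print(grandmother_cell([1, 0, 1], w, theta))  # its own grandmother -> 1
print(grandmother_cell([1, 1, 1], w, theta))  # any other pattern   -> 0
```

This also illustrates the model's two limiting features noted above: recognizing a new pattern requires either a new cell or a change to these fixed weights.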

3.2. Kohonen’s Selectivity Criterion (Kohonen, 1993; Kohonen, 2001; Kohonen, 2002)

The most famous approach to neuronal modeling based on selectivity was proposed by T. Kohonen and applied in the Self-Organizing Map (SOM), Kohonen (2001). The SOM is based on vector input data fed to the Kohonen neuronal model. That input is a vector data pattern developed so as to change the status of the model. The changes are based on an incremental stepwise correction process. The original SOM algorithm aims to determine the so-called winner-take-all (WTA) function. That function refers to a physiological selectivity criterion applied so as to define the winner index c by searching for the m_i(t) closest to x(t):

c = arg min_i || x(t) − m_i(t) ||

where x(t) is an n-dimensional vector data sample, m_i(t) is a spatially ordered set of model vectors arranged as a grid, and t is a running index of input samples and also the index of iteration steps. The iterative process is supposed to continue in time so that the asymptotic values of the m_i constitute the desired ordered projection on the grid.
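The winner-take-all step can be sketched as follows. For brevity this sketch updates only the winning model vector; a full SOM would also update the winner's grid neighbours. The model grid, step size alpha, and input are illustrative assumptions.

```python
# Minimal sketch of Kohonen's winner-take-all step: pick the model
# vector m_i(t) closest to the input x(t), then move the winner a step
# alpha toward x(t) (incremental stepwise correction).

def wta_step(models, x, alpha=0.5):
    # c = argmin_i ||x - m_i||  (the physiological selectivity criterion)
    dists = [sum((xi - mi) ** 2 for xi, mi in zip(x, m)) for m in models]
    c = dists.index(min(dists))
    # stepwise correction of the winning model vector only
    models[c] = [mi + alpha * (xi - mi) for mi, xi in zip(models[c], x)]
    return c, models

models = [[0.0, 0.0], [1.0, 1.0]]
c, models = wta_step(models, [0.9, 1.1])
print(c)          # -> 1 (second model vector is closest to the input)
print(models[1])  # winner moved halfway toward the input
```

Repeating this step over many input samples drives the asymptotic m_i values toward the ordered projection mentioned above.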

3.3. Hopfield Network Selectivity Model (Haykin, 1999)

The selectivity of the Hopfield NN model is proved to be closely attached to that network's computational power, which is demonstrated by obtaining decisions in some optimization problems. That means the ability to select one of the possible answers the model might give. Therein, the resulting selectivity pattern (over all possible solutions) is shown in the form of a histogram. As a numerical example, the selectivity value of the Hopfield neural network model was about 10^-4 to 10^-5 when applied to solve the travelling salesman problem (TSP) considering 100 neurons; that value is the fraction of all possible solutions. In practice, it is noticeable that by increasing the number of neurons comprising the Hopfield network, the selectivity of the network is expected to improve (increase). The cost function concept supports the above-presented selectivity criterion in the TSP. Referring to Eq. (1) given in subsection (3.1), the input and model vector patterns of each pair are respectively called the key and stored patterns. The cost function concept is adopted as a measure of how far away we are from the optimal solution of the memorization problem. Mathematically, the cost C is a function of the observations, and the problem becomes that of finding the model f which minimizes the value of C when we have only N samples of vector pairs drawn from the distribution.
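The cost-function idea can be sketched as an empirical average over the N available sample pairs. The candidate models and sample data below are illustrative assumptions, not the Hopfield/TSP setting itself.

```python
# Sketch of the cost-function concept: with only N sample pairs
# (x_k, d_k) drawn from the distribution, the cost of a candidate
# model f is estimated by the average squared discrepancy between
# desired and produced outputs, and we pick the f minimising it.

def empirical_cost(f, samples):
    """C(f) ~ (1/N) * sum over the N pairs of (d_k - f(x_k))^2."""
    return sum((d - f(x)) ** 2 for x, d in samples) / len(samples)

samples = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]   # pairs drawn as d = 2x
candidates = {"f(x)=x": lambda x: x, "f(x)=2x": lambda x: 2 * x}

# choose the candidate model minimising the empirical cost C
best = min(candidates, key=lambda name: empirical_cost(candidates[name], samples))
print(best)  # -> f(x)=2x  (zero cost on these samples)
```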

3.4.  Selectivity Criterion for Learning by Interaction with Environment (Chhabra, 2008)

It is worthy to note that this selectivity condition considers a network model adopting artificial neurons with a threshold (step) activation function, as shown above in Fig. 1. The necessary and sufficient condition for some neuron to fire selectively to a particular input pattern vector x_c is formulated mathematically as given below. The neuron must fire for x_c:

Σ (m = 1 … n) w_m x_c,m ≥ θ

where θ is the fixed threshold value controlling the firing of the neuron, and it must remain silent for any other admissible input pattern x ≠ x_c:

Σ (m = 1 … n) w_m x_m < θ
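The selective-firing condition for a step-activation neuron (fire for x_c, stay silent for every other pattern) can be checked directly. The weights, threshold, and pattern set below are illustrative assumptions.

```python
# Sketch of the selectivity condition: a step neuron fires selectively
# to x_c iff w . x_c >= theta while w . x < theta for all other
# patterns in the admissible set.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fires_selectively(w, theta, x_c, other_patterns):
    fires_for_target = dot(w, x_c) >= theta
    silent_for_rest = all(dot(w, x) < theta for x in other_patterns)
    return fires_for_target and silent_for_rest

w, theta = [1.0, -1.0, 1.0], 2.0
x_c = [1, 0, 1]
others = [[1, 1, 1], [0, 0, 1], [1, 0, 0]]
print(fires_selectively(w, theta, x_c, others))  # -> True
```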

4. MODELING OF E-LEARNING PERFORMANCE

The figure below illustrates the interrelations among the components of the e-learning process, presenting the face-to-face tutoring between the instructor and the e-learner.


Fig-2. A general view of the interactive educational process presenting face-to-face interaction. The source is "On Teaching Quality Improvement of a Mathematical Topic Using Artificial Neural Networks Modeling (With a Case Study)", published at the 10th International Conference "Models in Developing Mathematics Education", held in Dresden, Saxony, Germany, September 11-17, 2009.

4.1. Modeling of Face to Face Tutoring (Al-Ajroush, 2004)

In face-to-face tutoring, the phase of interactive cooperative learning is an essential paradigm aiming to improve any e-learning system's performance. In more detail, face-to-face tutoring proceeds in three phases (learning from the tutor, learning from self-study, and learning from interaction with fellow learners). It has been declared that the phase of cooperative interactive learning among e-learning fellows (studying agent learners) contributes about one fourth of the e-learning academic achievement (output) attained during face-to-face tutoring sessions [23]. In this subsection, the first and second phases are modeled by one block diagram (Figure 3); however, two diversified mathematical equations describe the two phases separately. In the next subsection, cooperative learning is briefly presented by referring to Ant Colony System (ACS) optimization.


Fig-3. Block diagram for the learning paradigm adopted for quantifying creativity, adapted from Haykin (1999)

The error vector at any time instant (n) observed during the learning processes is given by:

e_k(n) = d_k(n) − y_k(n)

where e_k(n) is the error-correcting signal controlling the learning process adaptively, y_k(n) is the output signal of the model, and d_k(n) holds the numeric value(s) of the desired/objective parameter of the learning process (generally a vector). The model output and the weight dynamics are given by:

V_k(n) = Σ_j W_kj(n) X_j(n)

y_k(n) = φ(V_k(n))

W_kj(n+1) = W_kj(n) + ΔW_kj(n)

where X is the input vector, W is the weight vector, φ is the activation function, y is the output, e_k is the error value, and d_k is the desired output. Note that ΔW_kj(n) is the dynamical change of the weight vector value.

The above four equations are commonly applied for both learning phases, supervised (learning from the tutor) and unsupervised (learning from self-study). The dynamical change of the weight vector value, specifically for the supervised phase, is given by the equation:

ΔW_kj(n) = η e_k(n) X_j(n)

where η is the learning rate value during the learning process for both learning phases. However, for the unsupervised paradigm, the dynamical change of the weight vector value is given by the equation:

ΔW_kj(n) = η y_k(n) X_j(n)
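The two weight-update rules for a single neuron k can be sketched as follows. A linear activation φ(v) = v, along with the η and signal values, are illustrative assumptions made only for this sketch.

```python
# Sketch of the two update rules of Section 4.1: the supervised
# (error-correction) phase uses the error e_k = d_k - y_k, while the
# unsupervised (Hebbian, self-study) phase uses the output y_k itself.

def neuron_output(w, x):
    # linear activation phi(v) = v, an assumption for this sketch
    return sum(wi * xi for wi, xi in zip(w, x))

def supervised_step(w, x, d, eta):
    e = d - neuron_output(w, x)                  # e_k(n) = d_k(n) - y_k(n)
    return [wi + eta * e * xi for wi, xi in zip(w, x)], e

def hebbian_step(w, x, eta):
    y = neuron_output(w, x)                      # coincidence detection
    return [wi + eta * y * xi for wi, xi in zip(w, x)], y

w = [0.0, 0.0]
x, d, eta = [1.0, 0.5], 1.0, 0.1
w, e = supervised_step(w, x, d, eta)
print(w)   # weights moved in the direction that reduces the error e
```

A self-study (unsupervised) pass would call `hebbian_step` instead, reinforcing whatever output the current weights already produce.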

4.2. Gain Factor versus Learning Convergence

Referring to Grossberg (1988b); Mustafa (2011); Caudill (1989) learning by coincidence detection is considered; therein, the angle between the training weight vector and an input vector has to be detected. Referring to Caudill (1989) the output of the learning process under the Hebbian rule follows a saturating learning curve whose slope parameter performs analogously to the gain factor (slope) in the classical sigmoid function (Grossberg, 1988b). That output performs versus time closely similarly to the odd sigmoid function given as

y(n) = (1 − exp(−λn)) / (1 + exp(−λn))


Fig-4. Illustrates three different learning performance curves Y1, Y2, and Y3 that converge at times t1, t2, and t3, considering different gain factor values λ1, λ2, and λ3. (The source is adapted from Caudill (1989).)

5. SIMULATION RESULTS

5.1. Gain Factor Values versus Response Time

The graphical simulation results illustrated below in Fig. 5 show the gain factor's effect on improving the value of the time response measured after learning process convergence (Hassan, 2012). The four graphs in Fig. 5 are concerned with the improvement of the learning parameter response time (number of training cycles). That improvement is observed as increasing gain factor values (0.5, 1, 10, and 20) correspond respectively to decreasing numbers of training cycles (10, 7.7, 5, and 3 cycles, on approximate average).
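The qualitative trend just described (larger gain factor, fewer training cycles) can be sketched numerically with the odd sigmoid learning curve used elsewhere in this paper. The 0.95 achievement level and the search loop are illustrative assumptions, not the paper's measured values.

```python
# Sketch of the gain-factor effect: with the odd sigmoid
# y(n) = (1 - exp(-lam*n)) / (1 + exp(-lam*n)), a larger gain factor
# lam reaches a given achievement level in fewer training cycles.

import math

def odd_sigmoid(lam, n):
    return (1 - math.exp(-lam * n)) / (1 + math.exp(-lam * n))

def cycles_to_converge(lam, level=0.95, max_cycles=1000):
    """Smallest integer cycle count n with y(n) >= level."""
    for n in range(1, max_cycles + 1):
        if odd_sigmoid(lam, n) >= level:
            return n
    return None

for lam in (0.5, 1.0, 10.0, 20.0):
    print(lam, cycles_to_converge(lam))  # cycle count shrinks as lam grows
```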


Fig-5. Illustrates improvement of the average response time (number of training cycles) with increase of the gain factor values. The source is: "Modeling of Computer-Assisted Learning using Artificial Neural Networks", considered for publication by Nova Science Publishers, Inc. as a chapter in a hardcover book (2010).

5.2. Effect of Neurons' Number on Time Response (Hassan, 2012; Hassan, 2014)

The following simulation results show how the number of neurons may affect the time response performance. These graphically presented results show that, by changing the number of neural cells (14, 11, 7, 5, and 3) during the interaction of students with the e-learning environment, the performance is observed to improve with increase of the number of neuronal cells (neurons). That is shown in Figures 6, 7, 8, 9, and 10 respectively, for a fixed learning rate η = 0.1 and gain factor λ = 0.5. In more detail, these five figures are depicted from the reference sources cited in this subsection's heading (Hassan, 2012; Hassan, 2014).


Fig-6. Considering # neurons= 14


Fig-7. Considering # neurons= 11


Fig-8. Considering # neurons= 7


Fig-9. Considering # neurons= 5


Fig-10. Considering # neurons= 3

Referring to Figure 11, it is noticed that statistical learning rate variations (on average values) are related to the corresponding selectivity convergence (response) time. That measured convergence (response) time is presented as the number of iteration cycles. The obtained output results corresponding to learning rate values (η) of (0.1, 0.2, 0.4, 0.6, and 0.8) are given as (330, 170, 120, 80, and 40) iteration training cycles respectively. Consequently, convergence time (number of training cycles) is inversely proportional to the corresponding learning rate value. Moreover, it is an interesting remark that under noisier environmental conditions, the learning rate tends to have a lower value. Conversely, e-learners who achieve learning rate improvement by interaction with the environment imply an increase of their intrinsically stored experience via their synaptic connectivity patterns. Conclusively, such e-learners have become capable of responding spontaneously to input environmental stimuli (questions) in an optimal manner (desired answer) (Grossberg, 1988b; Hassan, 2014).
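The inverse relation between learning rate and convergence time can be sketched with a single linear neuron trained by an assumed delta rule. The target, input, and error tolerance below are illustrative, not the experiment's values, but the qualitative trend (higher η, fewer cycles) matches the one reported above.

```python
# Sketch of convergence time versus learning rate: a single linear
# neuron trained by the delta rule needs fewer iteration cycles to
# reach a fixed error tolerance as eta grows (within the stable range).

def training_cycles(eta, target=1.0, x=1.0, tol=1e-3, max_cycles=10000):
    w = 0.0
    for n in range(1, max_cycles + 1):
        e = target - w * x           # e(n) = d(n) - y(n)
        if abs(e) < tol:
            return n
        w += eta * e * x             # delta-rule weight update
    return max_cycles

for eta in (0.1, 0.2, 0.4, 0.6, 0.8):
    print(eta, training_cycles(eta))  # cycle count falls as eta rises
```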


Fig-11. Illustrates the average (of the statistical distribution) of the selectivity response time (number of iteration cycles) versus different learning rate values (η). The source is: "On Learning Performance Evaluation for Some Psycho-Learning Experimental Work versus an Optimal Swarm Intelligent System", published at ISSPIT 2005, 18-20 Dec. 2005, Athens, Greece.

5.3. Analogy of Behavioral Learning versus Cooperative Learning by ACS (Dorigo, 1997; Rechardson and Franks, 2006; Hassan, 2012)

Referring to Fig. 12 below, ants move on a straight line that connects a food source to their nest. It is well known that the primary means for ants to form and maintain the line is a pheromone trail. Ants deposit a certain amount of pheromone while walking, and each ant probabilistically prefers to follow a direction rich in pheromone (Fig. 12A). This elementary behaviour of real ants can be used to explain how they can find the shortest path that reconnects a broken line after the sudden appearance of an unexpected obstacle has interrupted the initial path (Fig. 12B). In fact, once the obstacle has appeared, those ants which are just in front of the obstacle cannot continue to follow the pheromone trail and therefore have to choose between turning right or left. In this situation we can expect half the ants to choose to turn right and the other half to turn left. A very similar situation can be found on the other side of the obstacle (Fig. 12C). It is interesting to note that those ants which choose, by chance, the shorter path around the obstacle will reconstitute the interrupted pheromone trail more rapidly than those which choose the longer path. Thus, the shorter path will receive a greater amount of pheromone per time unit, and in turn a larger number of ants will choose the shorter path. Due to this positive feedback (autocatalytic) process, all the ants will rapidly choose the shorter path (Fig. 12D).
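The double-bridge behaviour just described can be sketched as a small stochastic simulation. The initial pheromone amounts, the deposit rule (1/length per trip), the ant count, and the 2:1 branch-length ratio are all illustrative assumptions.

```python
# Sketch of the double-bridge experiment: each ant chooses a branch
# with probability proportional to its pheromone; the shorter branch
# is reinforced more per unit time, so pheromone (and traffic)
# concentrates on it via positive feedback.

import random

def simulate_bridge(n_ants=2000, seed=42):
    random.seed(seed)
    pher = {"short": 1.0, "long": 1.0}   # equal trails initially
    length = {"short": 1.0, "long": 2.0}
    for _ in range(n_ants):
        total = pher["short"] + pher["long"]
        branch = "short" if random.random() < pher["short"] / total else "long"
        # the shorter branch receives more deposits per unit time:
        # deposit an amount inversely proportional to branch length
        pher[branch] += 1.0 / length[branch]
    return pher

pher = simulate_bridge()
print(pher["short"] > pher["long"])  # -> True: positive feedback wins
```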

Referring to more recent work (Rechardson and Franks, 2006) an interesting view of the distributed biological system ACS is presented. Therein, the ant Temnothorax albipennis uses a learning paradigm (technique) known as tandem running to lead another ant from the nest to food, with signals between the two ants controlling both the speed and course of the run. That learning paradigm involves bidirectional feedback between teacher and pupil and is considered supervised learning, Haykin (1999). Interestingly, the animal learning principles adopted herein have recently been applied to the evaluation of some human educational issues (Hassan, 2015; Hassan and Ayoub, 2016).


Fig-12. Illustrates the process of transportation of food (from the food source) to the food store (nest). The source is adapted from Dorigo (1997).

Considering cooperative learning by the Ant Colony System for solving the TSP, and referring to Fig. 12 (adapted from Dorigo (1997)), the difference between communication levels among agents (ants) develops different output average speeds toward the optimum solution. The changes of communication level are analogous to different values of λ in the odd sigmoid function, as shown in equation (20) below. When the number of training cycles increases virtually to an infinite value, the number of salivation drops obviously reaches a saturation value; additionally, the pairing stimulus develops the learning process tuned in accordance with the Hebbian learning rule (Hebb, 1949). However, values of λ other than zero implicitly mean that an output signal is developed by the neuron motors. Furthermore, increasing the number of neurons, which is analogous to the number of ant agents, results in better learning performance in reaching an accurate solution, as graphically illustrated for fixed λ (Eric and Guy, 2001; Rechardson and Franks, 2006).


Fig-13. Illustrates performance of ACS with and without communication between ants (adapted from Dorigo (1997)).

This different response speed to reach the solution is analogous to different communication levels among agents (artificial ants), as shown in Fig. 14. It is worthy to note that communication among the agents of the artificial ant model develops different speed values to obtain an optimum solution of the TSP, considering a variable number of agents (ants).


Fig-14. Communication determines a synergistic effect: different communication levels among agents lead to different values of average speed. The source is: "On Comparative Analogy between Ant Colony Systems and Neural Networks Considering Behavioral Learning Performance", Journal of Computer Sciences and Applications, 2015, Vol. 3, No. 3, 79-89. Available online at http://pubs.sciepub.com/jcsa/3/3/4. DOI: 10.12691/jcsa-3-3-4.

Consequently, as this set of curves reaches different normalized optimum speeds to get the TSP solution (either virtually or actually) with different numbers of ants, the set can be mathematically formulated by the following formula:

y(n) = α (1 − exp(−λn)) / (1 + exp(−λn))                                                       (19)

where α is an amplification factor representing the asymptotic value of the maximum average speed to get optimized solutions, and λ is the gain factor changing in accordance with the communication between ants. Referring to Fig. 15 below, the relation between the number of neurons and the obtained achievement is given considering three different gain factor values (0.5, 1, and 2).

Fig. 15 illustrates the obtained neural modeling results, which declare an interesting qualitative comparative analogy between performance evaluation of behavioral ANN modeling and the smart optimization performance of the Ant Colony System presented in Figures 13 and 14. More precisely, the gain factor values given in Fig. 15 are analogous to the intercommunication level values inside the ACS given in Fig. 13 and Fig. 14.


Fig-15. Illustrates students' learning achievement for different gain factors and, intrinsically, various numbers of neurons, measured for a constant learning rate value (η) = 0.3. The source is: "On Quantifying of Learning Creativity Through Simulation and Modeling of Swarm Intelligence and Neural Networks", published at IEEE EDUCON 2011, Education Engineering – Learning Environments and Ecosystems in Engineering Education, held April 4-6, 2011, Amman, Jordan.

However, this mathematical formulation of the model's normalized behavior shows that changing the communication level (represented by λ) changes the speed of reaching the optimum solution. Fig. 16 below illustrates the normalized model behavior according to the following equation.

y(n) = (1 - exp(-λi(n-1))) / (1 + exp(-λi(n-1)))                                    (20)

where λi represents one of the gain factors (slopes) of the sigmoid function.
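Equation (20) is straightforward to evaluate directly. The following Python sketch (a minimal illustration, not part of the original model) computes y(n) for the three gain factors plotted in Fig. 16:

```python
import math

def y(n, lam):
    """Bipolar sigmoid of Eq. (20): normalized output after n training
    cycles with gain factor lam; starts at 0 for n = 1 and tends to 1."""
    x = lam * (n - 1)
    return (1.0 - math.exp(-x)) / (1.0 + math.exp(-x))

# Larger gain factors reach the normalized limit in fewer cycles.
for lam in (0.5, 1.0, 2.0):
    print(lam, [round(y(n, lam), 3) for n in (1, 2, 5, 10)])
```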


Fig-16. Graphical representation of the learning performance of the model with different gain factor values (λ). Source: "On Quantifying of Learning Creativity Through Simulation and Modeling of Swarm Intelligence and Neural Networks", published at IEEE EDUCON 2011, on Education Engineering – Learning Environments and Ecosystems in Engineering Education, held on April 4-6, 2011, Amman, Jordan.

6. CONCLUSIONS & FUTURE WORK

Herein, some conclusive remarks related to the obtained results are presented, along with some relevant future research directions concerning the effect of the internal (intrinsic) learners' brain state, as well as external environmental factors, on the convergence of learning/training processes.

6.1. Conclusions 

Through the performance evaluation approach presented above, three interesting points are concluded for enhancing the quality of e-learning systems, as follows:

  • Evaluation of any e-learning system's quality following the previously suggested measurement of learning convergence/response time. The experimentally measured average of response time values (a quantified evaluation) provides educationalists with a fair and unbiased judgment of any e-learning system (considering a pre-assigned achievement level).
  • As a consequence of the above remark, a relative quality comparison between two e-learning systems (on the basis of the suggested metric) is supported by quantified performance evaluation.
  • Modification of learning system performance is obtained by incrementing the learning rate value, expressed as the ratio between the achievement level (testing mark) and the learning response time. This implies that the learning rate could be considered a modifying parameter contributing to both learning parameters (learning achievement level and learning convergence time response).
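The ratio in the third remark can be made concrete with a small, purely illustrative computation (the function name and numbers below are hypothetical, not measured data):

```python
def learning_rate(achievement, response_time):
    """Suggested metric: achievement level (testing mark) divided by
    the learning convergence/response time."""
    if response_time <= 0:
        raise ValueError("response time must be positive")
    return achievement / response_time

# Two hypothetical e-learning systems reaching the same mark of 85:
fast = learning_rate(85, 10.0)   # converges in 10 time units
slow = learning_rate(85, 20.0)   # converges in 20 time units
print(fast, slow)                # the faster system scores the higher rate
```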

6.2. Future Research Work 

The following are some research directions that may be adopted in the future:

  • Application of improved synaptic connectivity with random weight values in order to develop medically promising treatment of mentally disabled learners.
  • Simulation and modeling of complex educational issues, such as the deterioration of achievement levels in different learning systems due to non-well-prepared tutors.
  • Study of the ordering of the teaching curriculum, simulated as an input data vector to neural systems, which improved both learning and memory for the introduced simulated ANN model.
  • Experimental measurement of learning systems' performance, in addition to analytical modeling and simulation of these systems, aiming to improve their quality.

Finally, more elaborate evaluation and assessment of individual differences phenomena is critically needed for the educational process.

REFERENCES

Al-Ajroush, 2004. Associate manual for learners' support in face to face tuition. An Internal Technical Report Submitted at Open University (KSA).

Caudill, M., 1989. Neural networks primer, part 1. AI Expert: 1-7.

Chhabra, S.A., 2008. Analysis & integrated modeling of the performance evaluation technique for evolutionary parallel systems. International Journal of Computer Science and Security, 1(1).

Hebb, D.O., 1949. The organization of behavior: A neuropsychological theory. New York: Wiley.

Dorigo, M., 1997. Ant colonies for the traveling salesman problem. Retrieved from www.iridia.ulb

Douglas, F.M.R., 2005. Making memories sticks. Majallat Aloloom, 21(3/ 4): 18-25.

Eric, B. and T. Guy, 2001. Swarm smarts. Majallat Aloloom, 17(5): 4-12.

Fukaya, M., 1988. Two level neural networks: Learning by interaction with environment. 1 st ICNN; San Diego.

Ghonaimy, M.A., A.M. Al- Bassiouni and H.M. Hassan, 1994. Learning ability in neural network model. Second International Conference on Artificial Intelligence Applications, Cairo, Egypt. pp: 22- 24.

Ghonaimy, M.A., A.M. Al- Bassiouni and H.M. Hassan, 1994a. Learning of neural networks using noisy data. Second International Conference on Artificial Intelligence Applications, Cairo, Egypt, Jan 1994. pp: 400-413.

Grossberg, S., 1988b. Neural networks and natural intelligence. The MIT Press. pp: 1-5.

Hassan, 2011. Building up bridges for natural inspired computational models across behavioral brain functional phenomena; and open learning systems. A Tutorial Presented at  the International Conference on Digital Information and Communication Technology and its Applications (DICTAP2011) which held  from June 21-23, 2011, at Universite de Bourgogne, Dijon, France.

Hassan, 2011. Natural inspired computational models for open learning. Published at the 5th GUIDE INTERNATIONAL Conference held in Rome (Italy) 18 - 19 November 2011.

Hassan, 2012. On performance evaluation of brain based learning processes using neural networks. 2012 IEEE Symposium on Computers and Communications (ISCC). 2012 IEEE Symposium on Computers and Communications (ISCC). pp: 000672-000679.

Hassan, 2013. On optimal analysis and evaluation of time response in e-learning systems neural networks approach. EDULEARN13, the 5th annual International Conference on Education and New Learning Technologies, held in Barcelona (Spain), on the 1st, 2nd and 3rd of July, 2013.

Hassan, 2015. On analysis and evaluation of non-properly prepared teachers based on character optical recognition considering neural networks modeling. Proceedings of the International Conference on Pattern Recognition and Image Processing ((ICPRIP'15) that held on March 16-17, 2015 Abu Dhabi (UAE).

Hassan, H.M., 2005. On principles of biological information processing concerned with learning convergence mechanism in neural and non-neural bio-systems. IEEE Conference, CIMCA 2005 Vienna, Austria (28-30 Nov.2005).

Hassan, H.M., 2008. A comparative analogy of quantified learning creativity in humans versus behavioral learning performance in animals: Cats, dogs, ants, and rats. A Conceptual Overview to be Published at WSSEC08 Conference to be held on 18-22 August 2008, Derry, Northern Ireland.

Hassan, H.M., 2008. On analysis of quantifying learning creativity phenomenon considering brain synaptic plasticity. WSSEC08 Conference to be held on 18-22 August 2008, Derry, Northern Ireland.

Hassan, H.M., 2008. On comparison between swarm intelligence optimization and behavioral learning concepts using artificial neural networks an over view. 12th World Multi-Conference on Systemics, Cybernetics and Informatics: WMSCI 2008 The 14th International Conference on Information Systems Analysis and Synthesis: ISAS 2008 June 29th - July 2nd, 2008 – Orlando, Florida, USA.

Hassan, H.M., 2014. Dynamical evaluation of academic performance in e-learning systems using neural networks modeling time response approach. IEEE EDUCON - Engineering Education 2014, held in Istanbul (Turkey), on the 3-5 of April, 2014.

Hassan, H.M., 2015. Comparative performance analysis and evaluation for one selected behavioral learning system versus an ant colony optimization system. Proceedings of the Second International Conference on Electrical, Electronics, Computer Engineering and their Applications (EECEA2015), Manila, Philippines, on Feb. 12-14.

Hassan, H.M., 2015. Comparative performance analysis for selected behavioral learning systems versus ant colony system performance neural network approach. International Conference on Machine Intelligence ICMI 2015.Held on Jan 26-27, 2015, in Jeddah, Saudi Arabia.

Hassan, H.M., 2015. On enhancement of reading brain performance using artificial neural networks’ modeling. Proceedings of the 2015 International Workshop on Pattern Recognition (ICOPR 2015) that held on May 4-5, 2015. Dubai, UAE.

Hassan, H.M., 2016. On brain based modeling of blended learning performance regarding learners' self-assessment scores using neural networks brain based approach. IABL 2016: The IABL Conference International Association for Blended Learning (IABL) will be held in Kavala Greece on 22-24 April 2016.

Hassan, H.M. and A.-H. Ayoub, 2016. An overview on classrooms' academic performance considering: Non-properly prepared instructors, noisy learning environment, and overcrowded classes (Neural Networks' Approach), accepted for oral presentation and publication. 6th International Conference on Distance Learning and Education (ICDLE 2015) in Paris.

Hassan, H.M. and A.-H. Ayoub, 2016. On comparative analogy of academic performance quality regarding noisy learning environment versus non-properly prepared teachers using neural networks' modeling. 7th International Conference on Education Technology and Computer (ICETC 2015) held on August 13-14, 2015, Berlin.

Hassan, H.M., A.-H. Ayoub and F. Al-Mohaya, 2007. On quantifying learning creativity using artificial neural networks (A Nero-physiological Cognitive Approach). Published at National Conference on Applied Cognitive Psychology held in Calcutta, India, 29-30 November.

Hassan, H.M., K.H. Mohammed, A.H. Ibrahim, A.-H. Ayoub and A.-S.M. Nada, 2014. Optimal estimation of penalty value for on line multiple choice questions using simulation of neural networks and virtual students' testing. Proceeding of UKSim-AMSS 16th International Conference on Modeling and Simulation held at Cambridge University (Emmanuel College),on 26-28 March 2014.

Hassan, M.H., 2005. On quantitative mathematical evaluation of long term potentiation and depression phenomena using neural network modeling. SIMMOD, 17-19 Jan. 2005. pp: 237-241.

Haykin, S., 1999. Neural networks. Englewood Cliffs, NJ: Prentice-Hall

Kandel, E.R., 1979. Small systems of neuron. Scientific American, Majallat Aloloom, 224: 67-79.

Kohonen, T., 1993. Physiological interpretation of the self organizing map algorithm. Neural Networks, 6(7): 895-905.

Kohonen, T., 2001. Self organizing maps. 3rd Edn., London: Springer.

Kohonen, T., 2002. Overture self- organizing Neural Networks, recent advances and applications. Physica- Verlag Heidelberg, New York: U. Seifert an Jain, L. C. (Eds.).

Mustafa, H.M., 2011. On assessment of brain function adaptability in open learning systems using neural network modeling (Cognitive Styles Approach). The IEEE International Conference on Communications and Information Technology ICCIT-2011, held on  Mar 29, 2011 - Mar 31, 2011, Aqaba, Jordan. pp: 229-237.

Richardson, T. and N.R. Franks, 2006. Teaching in tandem-running ants. Nature, 439(7073): 153.

Swaminathan, N., 2007. Cognitive ability mostly developed before adolescence, NIH study says. NIH Announces Preliminary Findings from an Effort to Create a Database that Charts Healthy Brain Growth and Behavior Scientific American Letter, May 18, 2007.

Tsien, J.Z., 2000. Linking Hebb’s coincidence-detection to memory formation. Current Opinion in Neurobiology, 10(2): 266-273.

Tsien, J.Z., 2001. Building a brainier mouse. Scientific American, Majallat Aloloom, 17(5): 28- 35.

Ursula, D. and R. Gerard, 2008. Animal intelligence and the evolution of the human mind. Scientific American.

White House, 1990. White House O.S.T.P. issues Decade of the Brain report: Maximizing human potential, 1990-2000.

APPENDIX

APPENDIX  A

Research Framework Suggested by Arab Open University
(K.S.A. Branch)

Titled: Building up bridges for Natural Inspired Computational Models across behavioral brain functional phenomena; and open learning systems

The main topics of this presented framework belong to a recently adopted interdisciplinary research direction. Namely, the suggested topic is concerned with building theoretical connections between neuroscience, cognitive science, and swarm intelligence to enhance educational decisions and/or learning performance. In particular, such theories would be capable of evaluating learning performance tasks, in addition to complex educational decisions. This is performed by realistic dynamical modeling of some educational/learning phenomena associated with brain functions (learning & memory) using Artificial Neural Networks. Briefly, these learning phenomena are learning creativity, individual differences, and different cognitive learning styles. In some detail, the framework is planned to be composed of three phases. These phases are motivated by dynamical learning mechanism(s) and technologies, and started by June 2007. Each of the framework's phases is planned to elapse for approximately 12-15 months, as follows:

  1. Simulation and Modeling of Behavioral Learning Performance, individual differences and Quantified Creativity Phenomenon Using Artificial Neural Networks.
  2. Modeling of Creativity Phenomenon observed in Ant Colony Systems and comparison with human learning creativity.
  3. Comparison between obtained results by the above two phases with recent research work related to modeling of brain functions. That by considering analytical comparisons among various Learning phenomena considering Ant Colony System Optimization and Artificial Neural Network modeling of behavioral learning.

Finally, it is worth noting that the above work has been started by the A.O.U. research team. It has been planned to elapse over a period of about 36 to 45 months.

Until Dec. 2009, that research work resulted in a set of interdisciplinary published papers interrelating neurobiology, AI, experimental psycho-learning, and swarm intelligence, as follows:

1-“Towards Evaluation of Phonics Method for Teaching of Reading Using Artificial Neural Networks (A Cognitive Modeling Approach)”, published at IEEE Symposium on Signal Processing and Information Technology Seventh Symposium held in Egypt-Cairo during 15-18 December 2007.

2-“On Quantifying Learning Creativity Using Artificial Neural Networks (A Nero-physiological Cognitive Approach)”, published at National Conference on Applied Cognitive Psychology held in India, Calcutta, 29 –30 November, 2007.

3-"On Analysis of Quantifying Learning Creativity Phenomenon Using Artificial Neural Networks' Modeling", published at January 2008 issue of Journal of Al Azhar University Engineering Sector JAUES.

4-"On Learning Performance Analogy between Some Psycho-Learning Experimental Work and Ant Colony System Optimization", published at IEMS 2008-International Conference on Industry, Engineering, and Management Systems, Cocoa Beach, Florida, March 10-12, 2008.

5-"On Comparison Between Swarm Intelligence Optimization  and  Behavioral Learning Concepts Using Artificial Neural Networks (An over view)", published at the 12th World Multi-Conference on  Systemics, Cybernetics and Informatics: WMSCI 2008 The 14th International Conference on Information Systems Analysis and Synthesis: ISAS 2008 June 29th - July 2nd, 2008 – Orlando, Florida, USA.

6-"On Artificial Neural Network Application for Modeling of Teaching Reading Using Phonics Methodology (Mathematical Approach)", published at the 6th International Conference on Electrical Engineering, ICEENG 2008, M.T.C, Cairo, Egypt.

7-"On Comparative Analogy Between Behavioral  Learning Performance, and Quantified Creativity Phenomenon  Using Artificial Neural Networks" to be published at the 12th World Multi-Conference on  Systemics, Cybernetics and Informatics: WMSCI 2008 The 14th International Conference on Information Systems Analysis and Synthesis: ISAS 2008 June 29th - July 2nd, 2008 – Orlando, Florida, USA.

8-"A Comparative Analogy of Quantified Learning Creativity in Humans Versus Behavioral Learning Performance in Animals: Cats, Dogs, Ants, and Rats.(A Conceptual Overview) ", published at WSSEC08 conference to be held on 18-22 August 2008, Derry, Northern Ireland.

9-"On Analysis of Quantifying Learning Creativity Phenomenon Considering Brain Synaptic Plasticity", published at WSSEC08 conference to be held on 18-22 August 2008,  Derry, Northern Ireland.

10 -"A Comparative Analogy Between Swarm Smarts Intelligence and Neural Network Systems.", published at WSSEC08 conference to be held on 18-22 August 2008, Derry, Northern Ireland.

11-"On Teaching Quality Improvement of a Mathematical Topic Using Artificial Neural Networks Modeling (With a Case Study)", published at the proceedings of the 10th International Conference "Models in Developing Mathematics Education", held in Dresden, Saxony, Germany, on September 11-17, 2009.

APPENDIX  B

The following program listing illustrates the mathematical modeling equations given above, considering learning with supervision paradigms. It originates from the Error Correction Learning Algorithm.

Supervision Learning Model

% Supervised learning model (error-correction / delta rule).
w = rand(3,100);                       % 100 random initial weight vectors
x1 = 0.8; x2 = 0.7; x3 = 0.9;          % fixed input stimulus (question)
l = 0.5;                               % sigmoid gain factor (lambda)
eta = 0.1;                             % learning rate
for i = 1:100
    w1 = w(1,i); w2 = w(2,i); w3 = w(3,i);
    net = w1*x1 + w2*x2 + w3*x3;
    y = (1 - exp(-l*net))/(1 + exp(-l*net));   % bipolar sigmoid output
    e = 0.8 - y;                               % error vs. desired output 0.8
    no(i) = 0;                                 % training-cycle counter, run i
    while e > 0.05                             % iterate until error <= 0.05
        no(i) = no(i) + 1;
        w1 = w1 + eta*e*x1;                    % delta-rule weight updates
        w2 = w2 + eta*e*x2;
        w3 = w3 + eta*e*x3;
        net = w1*x1 + w2*x2 + w3*x3;
        y = (1 - exp(-l*net))/(1 + exp(-l*net));
        e = 0.8 - y;
    end
end
% Histogram of convergence times over the 100 runs.
for i = 1:100
    nog(i) = 0;
    for x = 1:100
        if no(x) == i
            nog(i) = nog(i) + 1;
        end
    end
end
i = 0:99;
plot(i, nog(i+1), 'linewidth', 1.5)
xlabel('no of training cycles')
ylabel('no of occurrences for each cycle')
title('error correction algorithm')
grid on
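For readers without MATLAB, a brief Python transliteration of the supervised listing above may help; the constants mirror the MATLAB code, while the function and variable names are our own:

```python
import math
import random

def train_once(rng, x=(0.8, 0.7, 0.9), lam=0.5, eta=0.1,
               target=0.8, tol=0.05):
    """One run of the error-correction (delta) rule: counts the
    training cycles until the bipolar-sigmoid output comes within
    tol of the desired output."""
    w = [rng.random() for _ in x]
    cycles = 0
    while True:
        net = sum(wi * xi for wi, xi in zip(w, x))
        y = (1 - math.exp(-lam * net)) / (1 + math.exp(-lam * net))
        e = target - y
        if e <= tol:
            return cycles
        cycles += 1
        w = [wi + eta * e * xi for wi, xi in zip(w, x)]  # delta rule

rng = random.Random(0)
counts = [train_once(rng) for _ in range(100)]
print(min(counts), max(counts))  # spread of convergence times over 100 runs
```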

APPENDIX  C

The following program listing illustrates the mathematical modeling equations given in the above section, considering learning without supervision (Hebbian) paradigm.

 Hebbian Learning Rule Algorithm

% Unsupervised learning model (Hebbian rule).
w = rand(1000,1000);                   % random initial weights (rows 1-3 used)
x1 = 0.8; x2 = 0.7; x3 = 0.6;          % fixed input stimulus
l = 10;                                % sigmoid gain factor (lambda)
eta = 0.3;                             % learning rate
nog = zeros(1,100);                    % histogram bins
for i = 1:1000
    w1 = w(1,i); w2 = w(2,i); w3 = w(3,i);
    for v = 1:2                        % constant number of iterations
        net = w1*x1 + w2*x2 + w3*x3;
        y = 1/(1 + exp(-l*net));       % unipolar sigmoid output
        w1 = w1 + eta*y*x1;            % Hebbian updates (no error signal)
        w2 = w2 + eta*y*x2;
        w3 = w3 + eta*y*x3;
    end
    p = uint8((y/0.9)*90);             % bin index from final output
    nog(p) = nog(p) + 1;
end
i = 0:89;
plot((i+1)/100, nog(i+1), 'linewidth', 1.5, 'color', 'black')
xlabel('nearness of balance point')
ylabel('No of occurrences for each cycle')
title('Hebbian algorithm')
grid on
hold on
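Similarly, a brief Python sketch of the unsupervised Hebbian listing (the constants follow the MATLAB code; function and variable names are our own):

```python
import math
import random

def hebbian_run(rng, x=(0.8, 0.7, 0.6), lam=10.0, eta=0.3, iters=2):
    """Unsupervised Hebbian updates for a fixed number of iterations:
    weights grow in proportion to output * input (no error signal).
    Returns the final unipolar-sigmoid output."""
    w = [rng.random() for _ in x]
    y = 0.0
    for _ in range(iters):
        net = sum(wi * xi for wi, xi in zip(w, x))
        y = 1.0 / (1.0 + math.exp(-lam * net))
        w = [wi + eta * y * xi for wi, xi in zip(w, x)]  # Hebbian rule
    return y

rng = random.Random(1)
outputs = [hebbian_run(rng) for _ in range(1000)]
print(min(outputs), max(outputs))  # outputs cluster near the balance point
```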

APPENDIX  D

A simplified macro-level flowchart describing the algorithmic steps (for various numbers of neurons) using Artificial Neural Networks modeling.

About the Authors

Hassan Moustafa
Computer Engineering Department, Al-Baha Private College of Sciences Al-Baha, (KSA)
Fadhel Ben Tourkia
Computer Engineering Department, Al-Baha Private College of Sciences Al-Baha, (KSA)
Ramadan Mohamed Ramadan
Educational Psychology Department, Educational College Banha University, Egypt

Corresponding Authors

Hassan Moustafa
