Document Type : Original Research

Authors

1 MSc, Department of Computer Engineering and Information Technology, Shiraz University of Technology, Shiraz, Iran

2 PhD, Department of Computer Engineering and Information Technology, Shiraz University of Technology, Shiraz, Iran

3 MD, Clinical Neurology Research Center, Shiraz University of Medical Sciences, Shiraz, Iran

4 PhD, Department of Health Information Management, Shiraz University of Medical Sciences, Shiraz, Iran

Abstract

Introduction: Status epilepticus is one of the most common emergency neurological conditions, with high morbidity and mortality. This study aims to propose an intelligent approach to determine prognosis and the most common causes and outcomes based on clinical symptoms. Material and Methods: A perceptron artificial neural network was used to predict the outcome of patients with status epilepticus at discharge. However, although accurate, this method is not readily interpretable and is known as a black box; therefore, a set of rules was extracted from it in this study. The case study of this paper is the data of Nemazee hospital's patients. Results: The proposed model prognosticated with 70% accuracy, while the Bayesian network and Random Forest approaches achieved 51% and 46% accuracy, respectively. According to the results, the recovery and mortality groups had most often used phenytoin and anesthetic drugs, respectively, as the seizure-controlling drug. Moreover, drug withdrawal and cerebral infarction were identified as the most common etiologies for the recovery and mortality groups, respectively, and there was a relationship between age and outcome, as in previous studies. Conclusion: By identifying factors affecting the outcome, such as drug withdrawal, their effects can either be avoided or more intensive treatment can be provided for patients with a poor prognosis.

Keywords

Introduction

Status epilepticus (SE) is a neurologic disorder with high mortality and morbidity; it is defined as a prolonged, non-stopping seizure, or two or more discrete seizures without complete recovery of consciousness between them [ 1 , 2 ]. Some studies have reported that the seizure lasts 20-30 minutes, while others have defined it as more than 5 minutes [ 3 ]. The danger of this clinical situation lies in its long attacks, since experiments on the brains of adult monkeys have indicated that a continuous attack lasting about 45 to 60 minutes is sufficient to damage nerve cells [ 4 ]. Owing to the high mortality and morbidity of SE patients, there is an essential need for intelligent approaches to determine the prognosis of these patients at the time of discharge, one of which is data mining. Data mining is the process of analyzing data from different perspectives and summarizing it into new, useful information. Moreover, despite requiring minimal user intervention, it is able to express the logical relations between data [ 5 ]. One of the most accurate and reliable techniques of data mining is the Artificial Neural Network (ANN), which solves problems that have no algorithmic solution or only a very complex one. Despite its high accuracy, the ANN is known as a black box and cannot express or interpret the behavior of the model or the reason for a prediction [ 6 ]. In recent years, many studies have been carried out on rule extraction from ANNs [ 6 ]. They first select the appropriate structure for the network and train it, and then prune the created model to reduce the connections and neurons. Next, the outputs of the hidden layer are discretized so that rules can be extracted from them, and finally the specific rules are pruned and generalized. The use of ANNs in neurology includes the analysis of Electroencephalography (EEG) signals for seizure detection, because the evaluation of these signals is very time-consuming and tedious [ 4 , 7 , 8 ]. We found one study on prognosis based on clinical symptoms using intelligent approaches; it evaluated the underlying etiologic factors of epilepsy patients and predicted their prognosis using a Multi-Layer Perceptron Neural Network (MLPNN) based on risk factors. The results showed that the most important risk factors of epilepsy were febrile seizure, the parents' kinship, the history of epilepsy in relatives and the history of head trauma. The correct prediction rate for detection of the prognosis was 91.1% using the MLPNN algorithm [ 9 ]. Moreover, there are some studies based on statistical methods [ 10 - 13 ]. These statistical studies only report the frequency of symptoms for each group, and none of them performs prognostication.

Although many studies have been carried out on SE in Iran, particularly in pediatric patients [ 14 - 17 ], there is a gap for adults [ 10 , 18 ]. The proposed method in this paper provides concise rules instead of the existing weights in the network structure, which can easily be checked by an expert, and finally presents a new perspective of the data to system users. In fact, this study aims to determine the prognosis of adult patients with SE and the most important causes of the seizures via an interpretable ANN, using symptoms alone, without the results of EEG or Magnetic Resonance Imaging (MRI) tests, unlike some other studies [ 13 ].

The proposed rule-extraction method was first tested on some well-known UCI Machine Learning Repository datasets. An application was then developed in this study to manage SE patients' data and determine the prognosis of their outcome at discharge time with the ANN approach. Next, we generated a set of rules using an approach similar to that proposed by Kamruzzaman et al. [ 6 ].

Material and Methods

In this descriptive-analytic study, the major steps of the proposed method are summarized in Figure 1 and explained further in subsections.

Figure 1. Flowchart of the proposed method

Data collection and preprocessing

The subjects of this study were adult patients with SE (either convulsive or non-convulsive) who were admitted to Nemazee hospital (Shiraz, south of Iran) from January 2006 to February 2012. This research was carried out retrospectively on patients' records. The data were recorded after admission of SE patients, during the hospital course, by a daily questionnaire; by now, the patients have either been discharged or have died. The medical research ethics committee of Shiraz University of Medical Sciences approved the study protocol (approval number: 55-4082). For convulsive SE, the duration of seizure in these patients was at least 20 minutes, or there were at least two seizure attacks without returning to a normal level of consciousness between them (the definition of SE at that time); for non-convulsive SE, the seizure was clinically unrecognized but there was evidence of SE on EEG. The data include 134 records and 12 fields: gender, age, duration of epilepsy, cause of the epilepsy, seizure occurrence in the last six months, prior medication, status type, etiology, course of disease, seizure-controlling drugs, duration of hospitalization and patient outcome at discharge with four values of mortality, severe disability, moderate disability and good recovery. The patient outcome at discharge was based on the Glasgow Outcome Scale (GOS), which divides patients with brain injuries into five groups: death, persistent vegetative state, severe disability, moderate disability and low disability [ 19 ]. However, owing to the low count of patients in group 2, this group was merged with group 3. In addition, patients were classified into three age groups: 19-39, 40-59, and ≥ 60 years. Although 134 instances are few for data mining techniques, we wanted the results to be based on the real statistics in Iran; thus, we did not simulate instances. Table 1 shows the collection of attributes with their sets of possible values.

Attribute Name Values
Gender Male, Female
Age (years) (19-39), (40-59), (60 and more)
Duration of Epilepsy (years) (0), (≤5), (>5)
Cause of Epilepsy Secondary, Idiopathic, Unknown
Seizures in last 6 months Yes, No
Prior Medication Carbamazepine, Sodium Valproate, Phenytoin, Lamotrigine, Phenobarbital, Topiramate
Status Epilepticus Type Convulsive, Myoclonus, Non Convulsive
Etiology Medication Withdrawal, Metabolic Abnormalities, Tumor, Brain Infection, Trauma, Hypoxic, Cerebral Infarction, Cerebral Venous Thrombosis, Drug/Substance Abuse, Multiple Sclerosis
Course of Disease Acute, Non Acute
Seizure-Controlling Drugs Phenytoin, Depakin, Phenobarbital, Anesthetic, Others
Duration of Hospitalization (2-100) days
Glasgow Outcome Scale (GOS) Mortality, Severe Disability, Moderate Disability, Good Recovery
Table 1. The collection of the data and values

In order to use the ANN, all data must be converted from nominal to numerical form. Therefore, each value of each field was converted to a number. Then all values were normalized to the interval [0, 1] through the Min-Max method described in Equation (1), since ANN training is very sensitive and the mapping of inputs to outputs is totally dependent on the inputs.

Normalize(X) = \frac{X - \min(X)}{\max(X) - \min(X)} \times (NewMax - NewMin) + NewMin \quad (1)

Where min(X) and max(X) are the minimum and maximum values of X, respectively; here, NewMin and NewMax are 0 and 1, respectively.
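As a minimal illustration of this preprocessing step, Min-Max normalization and its inverse (used later to denormalize the extracted rules back to their original ranges) can be written as follows. This is a Python/NumPy sketch with hypothetical field values; the original work used MATLAB.

```python
import numpy as np

def min_max_normalize(x, new_min=0.0, new_max=1.0):
    """Map values of x linearly into [new_min, new_max] (Equation 1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min()) * (new_max - new_min) + new_min

def min_max_denormalize(x_norm, orig_min, orig_max, new_min=0.0, new_max=1.0):
    """Invert Equation 1 to recover values in the original range."""
    return (x_norm - new_min) / (new_max - new_min) * (orig_max - orig_min) + orig_min

# Hypothetical example: hospitalization durations in days
duration = [2, 15, 40, 100]
norm = min_max_normalize(duration)        # -> [0.0, 0.1327, 0.3878, 1.0]
print(min_max_denormalize(norm, 2, 100))  # -> [2., 15., 40., 100.]
```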

Creating ANN architecture and model training

In this study, the Neural Network toolbox from MATLAB R2011a was used to create the ANN model. A two-layer perceptron network was used, with 11 input neurons and 1 output neuron. The number of hidden neurons and the training and transfer functions were determined by trial and error. The performance of the model for the different settings was evaluated through the Mean Squared Error (MSE) performance function given in Equation (2):

MSE = \frac{1}{N} \sum_{i=1}^{N} (t_i - o_i)^2 \quad (2)

Where N is the total number of examples, t_i is the desired output and o_i is the actual output of the network. The performance of the model was calculated with 4, 5, 6, ..., 10 neurons in the hidden layer. This range was chosen after many tests: with fewer than four hidden neurons the training did not proceed properly, and because of the low number of training samples the network should not be larger than ten hidden neurons. Moreover, the data were divided into three portions: 80% for training, 10% for validation and 10% for testing the model. In order to obtain more reliable results, 10-fold cross validation was also used; the data set was divided into ten subsets and the training was repeated ten times. Each time, one of the ten subsets was used as the test set, one of the other nine subsets was used as the validation set, and the remaining eight subsets were used as the training set. Then the average MSE over all ten trials was computed.
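A minimal sketch of this evaluation loop is shown below, assuming the 134 preprocessed records are held in arrays X and y. The original study used the MATLAB Neural Network toolbox; scikit-learn's MLPRegressor is substituted here purely for illustration, and its internal early-stopping validation split stands in for the separate validation subset described above.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def cross_validated_mse(X, y, n_hidden=8, n_splits=10, seed=0):
    """Average test MSE over a 10-fold split, scored as in Equation (2)."""
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    fold_mse = []
    for train_idx, test_idx in kfold.split(X):
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="tanh",
                           early_stopping=True, validation_fraction=0.1,
                           max_iter=700, random_state=seed)
        net.fit(X[train_idx], y[train_idx])
        fold_mse.append(mean_squared_error(y[test_idx], net.predict(X[test_idx])))
    return np.mean(fold_mse)
```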

The performance of the model was calculated with the tansig and logsig transfer functions for the hidden layer and the traingdm, trainscg, trainlm, trainbr and trainrp training functions. The results of 700 training iterations are given in Table 2; the configuration with eight hidden neurons, the trainscg training method, the tansig transfer function for the hidden layer and a linear transfer function for the output layer had the best performance, with the lowest MSE of 0.0499.

Training Function Trainrp Trainbr Trainlm Trainscg Traingdm
Transfer Function Tansig Logsig Tansig Logsig Tansig Logsig Tansig Logsig Tansig Logsig
Number of hidden neurons (MSE per configuration)
4 0.0975 0.098 0.0844 0.0841 0.1403 0.1138 0.0948 0.098 0.1877 0.1055
5 0.1132 0.0969 0.0844 0.0841 0.1361 0.1556 0.0741 0.0947 0.2582 0.1163
6 0.0956 0.0966 0.0844 0.0841 0.2660 0.1103 0.0623 0.0959 0.3722 0.1236
7 0.1015 0.0963 0.0844 0.0842 0.3130 0.1156 0.0615 0.1038 0.2348 0.1263
8 0.1215 0.097 0.0844 0.0841 0.1058 0.2071 0.0499 0.1051 0.2777 0.1316
9 0.1051 0.1006 0.0844 0.0841 0.4266 0.1939 0.0632 0.1169 0.3793 0.1631
10 0.1158 0.0957 0.0844 0.0841 0.1773 0.3347 0.1027 0.1067 0.5806 0.1446
Table 2. The performance evaluation of the 700 iterations of the training
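The trial-and-error selection summarized in Table 2 amounts to a small grid search over the number of hidden neurons and the transfer function. A sketch of that loop is shown below; this is an illustrative Python substitute whose solvers differ from MATLAB's trainscg, trainlm, etc., and X, y denote the preprocessed data assumed in the previous subsection.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

best = None
for n_hidden in range(4, 11):                 # 4 ... 10 hidden neurons
    for activation in ("tanh", "logistic"):   # roughly tansig / logsig
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation=activation,
                           max_iter=700, random_state=0)
        # cross_val_score returns negative MSE, so negate it back
        mse = -cross_val_score(net, X, y, cv=10,
                               scoring="neg_mean_squared_error").mean()
        if best is None or mse < best[0]:
            best = (mse, n_hidden, activation)
print("lowest MSE %.4f with %d hidden neurons and %s activation" % best)
```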

Generating rules

There are different techniques for rule extraction from an ANN; one of them is the decomposition method, which analyzes each hidden neuron and its connections and finally aggregates the rules extracted at the individual level into a composite rule set [ 20 ]. Therefore, after training the model, the network structure was analyzed and the activation value of each hidden neuron and of the output neuron was calculated from the inputs, using Equation (3) and Equation (4).

net = \sum_{i=1}^{n} (w_i x_i) + b \quad (3)

O = \text{TransferFunction}(net) \quad (4)

Where, in Equation (3), n is the number of inputs of each neuron, w_i is the connection weight, x_i is the input value of each neuron, and b is the bias of each hidden and output neuron. Then, in Equation (4), the transfer function was applied to this value. Next, the activation values of the hidden neurons obtained in the previous step were discretized in order to create the rules: values greater than or equal to 0.5 were mapped to 1 and the others to 0. Then, for each hidden neuron, a truth table was formed whose inputs were the database input values fed to the network, and a logic function was calculated for each hidden neuron in terms of the inputs. The same was done for the output neuron, but this time the discretized activation values of the hidden neurons were the inputs of the truth table. Finally, rules were generated by combining the two sets of rules, so that the mapping from inputs to output was provided in the human-readable form of a set of rules. This method was tested on the Exclusive-OR (XOR) function with two, three and four bits and provided accurate results.
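A minimal sketch of Equations (3) and (4) together with the 0.5-threshold discretization is given below (illustrative Python; the weight matrix W_hidden and bias vector b_hidden are assumed to come from a trained network).

```python
import numpy as np

def tansig(net):
    """MATLAB's tansig transfer function, equivalent to tanh."""
    return np.tanh(net)

def hidden_activations(x, W_hidden, b_hidden):
    """Equations (3) and (4): weighted sum plus bias, then the transfer function."""
    net = W_hidden @ x + b_hidden
    return tansig(net)

def discretize(activations, threshold=0.5):
    """Map activations >= 0.5 to 1 and the rest to 0 before building truth tables."""
    return (np.asarray(activations) >= threshold).astype(int)
```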

To illustrate further, the rule generation process for the XOR function with two bits is explained. The arbitrary ANN structure consists of two input neurons, X1 and X2, three hidden neurons, H1, H2 and H3, one output neuron, and their biases.

The activation values of each hidden and output neuron were calculated according to Equation (3) and Equation (4) and then discretized. Next, the truth tables were formed and a logic function was calculated for each hidden neuron separately; the results of this step are shown in Figure 2a. Then a truth table was formed for the output neuron in terms of the discretized activations of the hidden neurons; the results of this step are shown in Figure 2b. Finally, all of the obtained rules were combined, as shown in Figure 2c.

Figure 2. Generating rules for the Exclusive-OR (XOR) function. A: Calculating the logic function for each hidden neuron in terms of the input neurons; B: Calculating the logic function for the output neuron in terms of the hidden neurons; C: Generating rules for the output neuron in terms of the Artificial Neural Network's (ANN's) inputs.

The extracted rule for the XOR function with two bits was interpreted as: If (X1=1 And X2=0) Or (X1=0 And X2=1) Then O=1; Else O=0.
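To make this worked example concrete, the sketch below hard-codes a small network that computes XOR (with two hidden neurons for brevity, whereas the paper's illustration uses three) and enumerates the truth tables that the decomposition step would produce. All weights here are illustrative assumptions, not values from the authors' trained model.

```python
import itertools
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-picked weights: H1 acts like OR, H2 like AND, output like "H1 AND NOT H2"
W_hidden = np.array([[20.0, 20.0],   # H1
                     [20.0, 20.0]])  # H2
b_hidden = np.array([-10.0, -30.0])
w_out, b_out = np.array([20.0, -20.0]), -10.0

rows = []
for x1, x2 in itertools.product([0, 1], repeat=2):
    x = np.array([x1, x2], dtype=float)
    h = (sigmoid(W_hidden @ x + b_hidden) >= 0.5).astype(int)  # discretized hidden layer
    o = int(sigmoid(w_out @ h + b_out) >= 0.5)                 # discretized output
    rows.append(((x1, x2), tuple(h), o))

for inputs, hidden, out in rows:
    print(f"X={inputs}  H={hidden}  O={out}")
# Combining the rows with O=1 reproduces the extracted rule:
# If (X1=1 And X2=0) Or (X1=0 And X2=1) Then O=1; Else O=0.
```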

Pruning and generalization rules

The extracted rules were completely specific; therefore, they were generalized so as to cover new examples. After generating the rules, the rules that led to misclassification were pruned, and then the rules of each class were generalized by the algorithm proposed in this paper, shown in Figure 3.

Figure 3. The proposed algorithm for rules generalization

As shown in Figure 3, the proposed algorithm takes the set of extracted and pruned rules R as input; S denotes the rule of each class, and there are four such classes. Then the inconsistent values of the conditions (attributes) of the rules in S are made consistent or general, one attribute at a time, by checking each rule d in S. For each attribute ai in S, if ai is a binary attribute and is inconsistent with ai in d, ai is removed from S; for example, gender, seizures in the last six months and course of disease are binary, and if they have inconsistent values they are removed. For each attribute ai in S, if ai is a numeral attribute and greater or smaller than ai in d, the data range of ai in S is expanded; the numeral attributes are age, duration of epilepsy and duration of hospitalization. For example, hospitalization takes a variety of values for the different instances of each class, and after generalization the data range is expanded to, say, between 20 and 40 days. For each attribute ai in S, if ai is a categorical attribute, ai in S is replaced by some of its most frequent values; the categorical fields are cause of epilepsy, prior medication, status epilepticus type, etiology and seizure-controlling drugs. With generalization, some (at most three) of the most frequent values are extracted for each class; for example, medication withdrawal and hypoxia were found to be the most frequent values of the etiology field.
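A simplified sketch of this generalization step is given below. The attribute typing and the representation of each specific rule as an attribute-to-value dictionary are assumptions made for illustration; the exact bookkeeping in Figure 3 may differ.

```python
from collections import Counter

BINARY = {"gender", "seizures_last_6_months", "course_of_disease"}
NUMERIC = {"age", "duration_of_epilepsy", "hospitalization"}
CATEGORICAL = {"cause_of_epilepsy", "prior_medication", "status_type",
               "etiology", "seizure_controlling_drugs"}

def generalize_class_rules(rules):
    """Merge the specific rules of one class (a list of attribute->value dicts)
    into a single generalized rule, following the three cases in Figure 3."""
    general = {}
    for attr in rules[0]:
        values = [r[attr] for r in rules]
        if attr in BINARY:
            # keep the condition only if every rule of the class agrees on it
            if len(set(values)) == 1:
                general[attr] = values[0]
        elif attr in NUMERIC:
            # expand the condition to the observed range, e.g. 20-40 days
            general[attr] = (min(values), max(values))
        elif attr in CATEGORICAL:
            # keep at most the three most frequent values
            general[attr] = [v for v, _ in Counter(values).most_common(3)]
    return general
```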

Integration of all systems

To allow physicians to easily use the proposed method to prognosticate and identify the most important factors, an application was developed. This application uses the weights of the trained network and the extracted rules.

Results

The results obtained in this study consist of two parts: the results of the model training in prognostication and the extracted rules, which are described below.

The results of the model training in prognostication

Training stopped automatically when generalization ceased to improve, as indicated by an increase in the MSE of the validation samples. The results of applying the ANN methodology to distinguish between the different outcomes of patients with SE showed a very good capability of the network to learn the patterns corresponding to the patients' symptoms, with an MSE of 0.05. The network was then simulated on the testing set (i.e. cases the network had not seen before). The results were satisfactory; the network was able to classify with an MSE of 0.12. Moreover, the best validation performance was 0.125 at epoch 237, as shown in Figure 4.

Figure 4. The best validation performance

In Figure 4, the blue, red and green lines represent the decrease of the MSE for the training, testing and validation data, respectively. The close proximity of the validation and testing lines indicates good data distribution and good training of the network [ 21 ]. To verify this, the confidence interval (CI) for the validation and testing data was calculated at the 99% confidence level. Each line contained 13 instances according to the distribution of the data. We calculated the mean performance of the trained samples (using the perform command in MATLAB), the standard deviation and the t-value for the 13 instances, and then the confidence intervals were calculated at the 99% level. The goal of this analysis is to investigate the proximity of the two lines at epoch 237, where the training stopped: if the obtained CIs of the two lines overlap, it can be said, with 99% confidence, that the lines are not different and are quite close. The t-based CI was computed through Equation (5), and the calculations of the CI for the testing and validation lines are given in Equation (6) and Equation (7), respectively; in Equation (5), X̄ is the sample mean of the data, s is the standard deviation, α = 0.01 for 99% confidence and n = 13 is the number of samples.

\text{Confidence Interval} = \bar{X} \pm t_{[1-\alpha;\ n-1]} \times (s \div \sqrt{n}) \quad (5)

\text{Test Set CI} = 0.1372 \pm 3.055 \times 0.2618 = (-0.6626,\ 0.9370) \quad (6)

\text{Validation Set CI} = 0.1259 \pm 3.055 \times 0.1190 = (-0.2376,\ 0.4894) \quad (7)
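A brief check of these interval computations can be scripted as below, assuming the per-fold MSE values of the 13 test and 13 validation instances are available as arrays; scipy is used to obtain the same critical value t ≈ 3.055 for 12 degrees of freedom.

```python
import numpy as np
from scipy import stats

def t_confidence_interval(samples, confidence=0.99):
    """Two-sided t-based CI around the mean, as in Equation (5)."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    mean = samples.mean()
    std_err = samples.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)  # ~3.055 for n=13
    return mean - t_crit * std_err, mean + t_crit * std_err

# Hypothetical usage with the 13 per-fold MSE values of each line:
# lo_test, hi_test = t_confidence_interval(test_mse_values)
# lo_val,  hi_val  = t_confidence_interval(val_mse_values)
# overlap = lo_test <= lo_val and hi_val <= hi_test
```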

According to Equation (6) and Equation (7), the validation CI lies entirely within the testing CI, so the two intervals overlap completely. Therefore, it can be said that the lines are close and the model was trained well. In this study, for further analysis of the model, the accuracy, precision and recall measures were used; they are given in Equation (8), Equation (9) and Equation (10).

\text{Accuracy} = \frac{tp + tn}{tp + fp + fn + tn} = \frac{9}{13} = 0.6923 \approx 70\% \quad (8)

\text{Precision} = \frac{tp}{tp + fp} = \frac{\frac{3}{3} + \frac{1}{2} + \frac{3}{5} + \frac{2}{3}}{4} = 0.6916 \approx 70\% \quad (9)

\text{Recall} = \frac{tp}{tp + fn} = \frac{\frac{3}{5} + \frac{1}{1} + \frac{3}{4} + \frac{2}{3}}{4} = 0.7541 \approx 75\% \quad (10)

Where, in Equation (8), Equation (9) and Equation (10), tp, tn, fp and fn are the true positive, true negative, false positive and false negative cases, respectively; precision and recall were averaged over the four outcome classes.
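A sketch of how these three measures can be reproduced from per-class confusion counts is shown below. The four-class macro-averaging visible in Equations (9) and (10) is assumed, and the inputs are placeholder label arrays rather than the study's actual confusion matrix.

```python
import numpy as np

def evaluate(y_true, y_pred, n_classes=4):
    """Accuracy plus macro-averaged precision and recall over the outcome classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accuracy = np.mean(y_true == y_pred)
    precisions, recalls = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return accuracy, np.mean(precisions), np.mean(recalls)
```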

The results of the extracted rules

In this study, all of the data fed to the ANN were converted to rules, as was done for the XOR function in section 2.3. Then the extracted rules were generalized by the algorithm proposed in Figure 3 of section 2.4. All of the obtained outputs were in the normalized range; therefore, in order to produce understandable rules, all values were denormalized to their real, original ranges by inverting Equation (1). Then, for nominal fields, the numerical values were converted back to nominal values. Moreover, the rules were rewritten in If-Then form for easier understanding. These rules are shown in Figure 5a.

Figure 5. The Result of the extracted rule set from the model. a: Primary extracted rules; b: Final extracted rules.

However, as can be seen in Figure 5a, some of the conditions cover their entire range and are repeated in all four rules, such as duration of epilepsy, cause of epilepsy, status type and hospitalization. Thus, these conditions do not differentiate between the groups and should be ignored. After removing these four conditions, the final rules are shown in Figure 5b. In addition, in Figure 5b, the difference in the order of the values of a condition across the various rules reflects the difference in their frequency, for conditions such as prior medication, etiology and seizure-controlling drug; for example, in the first group phenobarbital is more frequent than depakin and phenytoin as prior medication. However, different conditions have no priority relative to each other.

Discussion

As shown in Figure 5b, the first rule concerns the patients with mortality GOS. They include all of the adult patients with SE whose previous treatments had been phenobarbital, depakin or phenytoin. Furthermore, the most important etiologies for this group were cerebral infarction, brain infection and hypoxia, respectively, and seizure management was possible with anesthetic drugs, phenytoin or depakin. It should be noted that the use of anesthetic drugs as the seizure-controlling drug is more common in this group than in the other groups. Anesthetic drugs are the most aggressive medications for status epilepticus and are usually used for refractory cases; therefore, the more fatal the underlying cause, the more probable their use.

The second rule relates to patients with severe disability GOS. They include all adults with phenytoin and carbamazepine as previous treatments. The most important etiologies of their seizures were cerebral infarction, brain infection and drug withdrawal, respectively; the seizures were controlled by phenytoin and anesthetic drugs. It is clear that in this group, phenytoin is used for seizure management more often than anesthetic drugs, in comparison with the prior group.

The third rule is about patients with moderate disability GOS. Although they also consist of all adult patients, this time the previous treatments include sodium valproate and carbamazepine. The most important etiologies for this group were antiepileptic drug withdrawal, tumor and metabolic problems, respectively. Seizure management of this group was possible through phenytoin, anesthetic drugs and phenobarbital.

Finally, the fourth rule is for patients with good recovery GOS. They comprise those adults younger than 60 years with sodium valproate and carbamazepine as previous treatments. The most important etiologies of seizure were antiepileptic drug withdrawal and drug/substance abuse, and their seizures were only controlled by phenytoin.

According to these descriptions, it can be concluded that younger patients are less likely to die and usually recover. Previous treatments for people with severe brain injuries (i.e. groups 1 and 2) were mostly phenobarbital, depakin and phenytoin, while the other two groups, who suffered less injury, had used sodium valproate and carbamazepine. Cerebral infarction (stroke) was identified as the main etiology for the first and second groups; thus, it is considered a poor prognostic factor for those patients. Furthermore, drug withdrawal and drug/substance abuse were identified as the main etiologies for the third and fourth groups, which is a good prognosis for them. Since drug-related factors are preventable through practical education and raising the awareness of patients with SE, brain injuries can thereby be prevented.

It is noticeable that anesthetic drugs were mostly used to control seizures of the people who died, and their use decreased as the brain injuries became less severe; therefore, for those who recovered, the use of these drugs was minimal. The relationship between age and outcome has been identified previously, and cerebral infarction (stroke) and drug withdrawal were introduced as the main etiologies (Poursadeghfard et al., 2014) [ 10 ]. Above all, older patients had a higher risk of death in comparison with younger ones, and stroke and tumor indicated a poor prognosis (Rossetti et al., 2006) [ 13 ]; however, Poursadeghfard et al. (2014) and Rossetti et al. (2006) did not address prior medication or seizure-controlling drugs.

Thus, the ANN method could classify and predict with about 70% accuracy. Although 70% accuracy may not be acceptable in the medical field, given the nature of the problem and the shortage of data, and based on the results obtained from implementing other data mining algorithms with strong abilities to predict and generate rules, namely the Bayesian network and a decision-tree ensemble (random forest), it can be argued that the proposed method provides higher accuracy. The results of implementing the Bayesian network and random forest methods on the same data with 10-fold cross validation in the Weka software are shown in Table 3. In the random forest method, the maximum depth of the trees was unlimited, the number of generated trees was 10 and the random number seed was 1; in the Bayesian network, the simple estimator was used for estimating the conditional probability tables and hill climbing was used as the search algorithm.

Method Accuracy % Precision % Recall %
ANN 70 70 75
Bayesian Network 51 50 50
Random Forest 46 45 45
ANN: Artificial Neural Network
Table 3. Performance evaluation of some methods of data mining
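For readers who want to reproduce a comparable baseline outside Weka, the sketch below runs the two reference classifiers with roughly matching settings. scikit-learn's GaussianNB is used as an approximate stand-in for Weka's Bayesian network classifier (not the same model), and X, y denote the preprocessed arrays assumed in the earlier sketches.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

baselines = {
    # 10 trees, unlimited depth, fixed seed, mirroring the Weka settings above
    "Random Forest": RandomForestClassifier(n_estimators=10, max_depth=None,
                                            random_state=1),
    # Naive Bayes as an approximate stand-in for Weka's BayesNet classifier
    "Bayesian (naive) baseline": GaussianNB(),
}

for name, clf in baselines.items():
    acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: {acc:.2%} accuracy")
```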

Therefore, according to the results of the other data mining techniques in Table 3, we can say that the ANN approach is better; however, it is likely that combining the ANN with the cat swarm optimization algorithm (Yusiong, 2012) [ 22 ] would provide better results, and this should be tested in future work.

Conclusion

In this study, an application was proposed to manage SE patients' data, determine the prognosis of their outcome at discharge via an intelligent rule-based method, and identify the most important influential factors. As noted previously, existing intelligent approaches to SE involve the analysis of EEG signal tests, and there has been no intelligent method based on the symptoms of SE. The proposed method was able to predict the outcome of patients with SE at discharge with an accuracy close to 70%; therefore, if the prognosis is poor, treatment can be made more intensive. In addition, the course of the disease, prior medication, seizure-controlling drugs and other etiology values were identified as important factors, besides the previously introduced age, medication withdrawal and cerebral infarction (stroke). Finally, the ANN approach provided higher accuracy than the other data mining methods.

References

  1. Lowenstein D H, Alldredge B K. Status epilepticus. N Engl J Med. 1998; 338:970-6. DOI | PubMed
  2. Watson C. Status epilepticus. Clinical features, pathophysiology, and treatment. West J Med 1991; 155:626-31. Publisher Full Text | PubMed
  3. Moayedi A, Atashabparvar A, Eftekhari E. Status epilepticus: etiology, outcome and predictors of mortality. Iranian Journal of Child Neurology. 2007; 2:19-23.
  4. Riviello Jr J J, Ashwal S, Hirtz D, Glauser T, Ballaban-Gil K, Kelley K, et al. Practice parameter: diagnostic assessment of the child with status epilepticus (an evidence-based review): report of the Quality Standards Subcommittee of the American Academy of Neurology and the Practice Committee of the Child Neurology Society. Neurology. 2006; 67:1542-50. DOI | PubMed
  5. Jackson J. Data mining; a conceptual overview. Communications of the Association for Information Systems. 2002; 8:19.
  6. Kamruzzaman S, Islam M. An algorithm to extract rules from artificial neural networks for medical diagnosis problems. arXiv preprint arXiv:10094566. 2010 ; 12(8):41-59.
  7. Automatic detection of epileptic spike using fuzzy ARTMAP neural network. Proceedings of the 10th WSEAS international conference on Signal processing, computational geometry and artificial vision; Taipei, Taiwan: World Scientific and Engineering Academy and Society (WSEAS); 2010.
  8. Sukanesh R, Harikumar R. A Comparison of Genetic Algorithm & Neural Network (MLP) In Patient Specific Classification of Epilepsy Risk Levels from EEG Signals. Engineering Letters. 2007; 14(1)
  9. Aslan K, Bozdemir H, Sahin C, Noyan Ogulata S. Can neural network able to estimate the prognosis of epilepsy patients according to risk factors?. J Med Syst. 2010; 34:541-50. DOI | PubMed
  10. Poursadeghfard M, Hashemzehi Z, Ashjazadeh N. Status Epilepticus in Adults: A 6-Year Retrospective Study. Galen Medical Journal. 2014; 3:153-59.
  11. Holtkamp M, Othman J, Buchheim K, Meierkord H. Predictors and prognosis of refractory status epilepticus treated in a neurological intensive care unit. J Neurol Neurosurg Psychiatry. 2005; 76:534-9. DOI
  12. Kwong KL, Chang K, Lam SY. Features predicting adverse outcomes of status epilepticus in childhood. Hong Kong Med J. 2004; 10:156-9. PubMed
  13. Rossetti A, Hurwitz S, Logroscino G, Bromfield E. Prognosis of status epilepticus: role of aetiology, age, and consciousness impairment at presentation. J Neurol Neurosurg Psychiatry. 2006; 77:611-5. DOI
  14. Adibeik B. Status epilepticus: a review. Iranian Journal of Child Neurology. 2008; 2:7-14.
  15. Akhondian J, Heydarian F, Jafari S A. Predictive factors of pediatric intractable seizures. Arch Iran Med. 2006; 9:236-9. PubMed
  16. Ashrafi M R. Status Epilepticus. Iranian Journal of Pediatrics. 2000; 10:204-17.
  17. Moayedi A, Atashabparvar A, Eftekhari E. Status epilepticus: etiology, outcome and predictors of mortality. Iranian Journal of Child Neurology. 2007; 2:19-23.
  18. Tabatabaei S S, Delbari A, Salman-Roghani R, Shahgholi L, Fadayevatan R, Mokhber N, et al. Seizures and epilepsy in elderly patients of an urban area of Iran: clinical manifestation, differential diagnosis, etiology, and epilepsy subtypes. Neurol Sci. 2013; 34:1441-6. DOI | PubMed
  19. Jennett B, Bond M. Assessment of outcome after severe brain damage. Lancet. 1975; 1:480-4. PubMed
  20. Huysmans J, Baesens B, Vanthienen J. Using rule extraction to improve the comprehensibility of predictive models. Behavioral & Experimental Economics. 2006; :1-55.
  21. Biswas S K, Mia M M A. Image Reconstruction Using Multi Layer Perceptron (MLP) And Support Vector Machine (SVM) Classifier And Study Of Classification Accuracy. International Journal of Scientific & Technology Research. 2015; 4:226-31.
  22. Yusiong J P T. Optimizing artificial neural networks using cat swarm optimization algorithm. International Journal of Intelligent Systems and Applications. 2012; 5:69-80.