Accepted Papers

  • Multi-Battalion Search Classifier
    May Al-Roomi and Ayed Salman, Kuwait University, Kuwait

    The multi-battalion search algorithm (MBSA) is a heuristic that solves optimization problems by simulating battlefield strategies and tactics to find optimal or near-optimal solutions. Its search strategy divides the search space into several battalions, or cells, and saves time by searching several areas in parallel. In this paper, we introduce the multi-battalion search classifier (MBS-C), a classifier that applies MBSA to clustering problems. The algorithm is simulated in NetLogo; multidimensional datasets are represented in the plane as 2-D soldiers using star coordinates. These 2-D soldiers are classified by colonels according to their cohesiveness, distance to their cluster, distance to their colonel, and uniqueness (i.e., fitness); the colonel is the soldier with the highest fitness in its cluster. Error rates are calculated on real datasets and compared with those of other classifiers, such as CURE and Vista. MBS-C exhibits remarkable performance, reaching 0% error rates in some cases, and is consistent, producing low error rates on every run of the algorithm.
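    The star-coordinates projection mentioned above can be sketched as follows — a minimal illustration of the standard technique, not the paper's implementation. Each of the d dimensions is assigned a unit axis vector at angle 2πj/d, and a point's 2-D position is the sum of its min-max-normalized coordinates times those axis vectors (the function name and normalization bounds are illustrative):

    ```python
    import math

    def star_coordinates(point, mins, maxs):
        """Project a d-dimensional point to 2-D star coordinates.

        Dimension j gets a unit axis vector at angle 2*pi*j/d; the 2-D
        position is the sum of the normalized coordinates times their
        axis vectors.
        """
        d = len(point)
        x = y = 0.0
        for j, v in enumerate(point):
            span = maxs[j] - mins[j]
            norm = (v - mins[j]) / span if span else 0.0  # min-max scale to [0, 1]
            angle = 2.0 * math.pi * j / d
            x += norm * math.cos(angle)
            y += norm * math.sin(angle)
        return (x, y)
    ```

    For example, a 4-D point that is maximal in its first dimension and minimal elsewhere lands on the first axis vector, at (1, 0).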

  • Random Forests for Diabetes Diagnosis
    Sofia Benbelkacem and Baghdad Atmani, University of Oran 1 Ahmed Benbella, Algeria

    Random forests are among the most successful recent developments in decision tree learning. In this paper, we exploit the principle of random forests to build a powerful model for the diagnosis of diabetes. Experimental results show that our random-forest-based approach is more efficient than other machine learning methods.
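    The random forest principle the paper builds on — train many trees on bootstrap samples with random feature subsets, then aggregate by majority vote — can be sketched in miniature with decision stumps standing in for full trees. This is a toy illustration of the general technique, not the authors' model; all names (`train_forest`, `train_stump`, etc.) are hypothetical:

    ```python
    import random
    from collections import Counter

    def train_stump(X, y, feat_indices):
        """Find the single (feature, threshold, flip) split with fewest errors."""
        best = None
        for f in feat_indices:
            for t in sorted({x[f] for x in X}):
                for flip in (False, True):
                    preds = [(x[f] > t) ^ flip for x in X]
                    err = sum(p != bool(label) for p, label in zip(preds, y))
                    if best is None or err < best[0]:
                        best = (err, f, t, flip)
        _, f, t, flip = best
        return (f, t, flip)

    def stump_predict(stump, x):
        f, t, flip = stump
        return int((x[f] > t) ^ flip)

    def train_forest(X, y, n_trees=25, seed=0):
        """Bagging: bootstrap the rows and subsample the features per tree."""
        rng = random.Random(seed)
        d = len(X[0])
        forest = []
        for _ in range(n_trees):
            idx = [rng.randrange(len(X)) for _ in range(len(X))]   # bootstrap sample
            feats = rng.sample(range(d), max(1, int(d ** 0.5)))    # random feature subset
            forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx], feats))
        return forest

    def forest_predict(forest, x):
        """Aggregate the individual trees' votes by simple majority."""
        votes = Counter(stump_predict(s, x) for s in forest)
        return votes.most_common(1)[0][0]
    ```

    On a diagnosis task, each row of `X` would hold a patient's features and `y` the binary diagnosis label; the ensemble vote smooths out the errors of any individual tree.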

  • Multi-Merge Budget Maintenance for Stochastic Gradient Descent SVM Training
    Sahar Qaadan and Tobias Glasmachers, Ruhr University Bochum, Germany

    Budgeted Stochastic Gradient Descent (BSGD) is a state-of-the-art technique for training large-scale kernelized support vector machines. The budget constraint is maintained incrementally by merging two points whenever the pre-defined budget is exceeded. The process of finding suitable merge partners is costly; it can account for more than 80% of the total training time. In this paper, we investigate computationally more efficient schemes that merge more than two points at once. We obtain significant speed-ups without sacrificing accuracy.
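    The pairwise budget maintenance described above can be sketched as follows. This is a simplified stand-in, not the paper's scheme: when the budget is exceeded, the two closest support-vector centers are merged into one point at their coefficient-weighted average (BSGD instead minimizes weight degradation of the kernel expansion). The class name `Budget` and positive coefficients are assumptions:

    ```python
    import math

    class Budget:
        """Toy budget maintenance for a kernel expansion.

        The model is a list of (alpha, center) pairs. Adding a point past
        the budget triggers a merge of the two closest centers.
        """
        def __init__(self, size):
            self.size = size
            self.sv = []  # list of (alpha, center) pairs, alpha > 0 assumed

        def add(self, alpha, center):
            self.sv.append((alpha, center))
            if len(self.sv) > self.size:
                self._merge_closest_pair()

        def _merge_closest_pair(self):
            # O(b^2) search for the closest pair -- the costly partner
            # search that multi-merge schemes aim to reduce.
            best = None
            for i in range(len(self.sv)):
                for j in range(i + 1, len(self.sv)):
                    dist = math.dist(self.sv[i][1], self.sv[j][1])
                    if best is None or dist < best[0]:
                        best = (dist, i, j)
            _, i, j = best
            (ai, ci), (aj, cj) = self.sv[i], self.sv[j]
            total = ai + aj
            merged = tuple((ai * p + aj * q) / total for p, q in zip(ci, cj))
            del self.sv[j], self.sv[i]  # delete j first so index i stays valid
            self.sv.append((total, merged))
    ```

    Merging more than two points at once, as the paper proposes, amortizes the quadratic partner search over several budget overflows instead of paying it on every insertion.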

Copyright © ARIN 2017