Machine Learning Latest Submitted Preprints | 2019-07-01


Machine Learning


MLFriend: Interactive Prediction Task Recommendation for Event-Driven Time-Series Data (1906.12348v1)

Lei Xu, Shubhra Kanti Karmaker Santu, Kalyan Veeramachaneni

2019-06-28

Most automation in machine learning focuses on model selection and hyperparameter tuning, largely overlooking the challenge of automatically defining prediction tasks. We still rely heavily on human experts to define prediction tasks and to generate labels by aggregating raw data. In this paper, we tackle the challenge of defining useful prediction problems on event-driven time-series data. We introduce MLFriend to address this challenge. MLFriend first generates all possible prediction tasks within a predefined space, then interacts with a data scientist to learn the context of the data and recommend good prediction tasks from that space. We evaluate our system on three different datasets, generating and solving a total of 2885 prediction tasks. Of these, 722 were deemed useful by expert data scientists. We also show that an automatic prediction-task discovery system can identify the top 10 tasks that a user may like within a batch of 100 tasks.
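
As a rough illustration of what such a task space can look like, here is a minimal sketch that enumerates candidate prediction tasks as combinations of an aggregation, a horizon, and an optional filter over an event table. The column names and the grids below are illustrative assumptions, not MLFriend's actual search space.

```python
# Hypothetical task-space enumeration over an event table with columns
# "user_id", "timestamp" (datetime), and "amount". Not the authors' code.
import itertools
import pandas as pd

AGG_OPS = ["count", "sum", "mean", "max"]   # label-generating aggregations
WINDOWS = ["1D", "7D", "30D"]               # prediction horizons
FILTERS = [None, ("amount", 100)]           # optional "column > threshold" filters

def generate_tasks(events: pd.DataFrame):
    """Yield (description, label_fn) pairs covering the task space."""
    for op, window, flt in itertools.product(AGG_OPS, WINDOWS, FILTERS):
        def label_fn(df=events, op=op, window=window, flt=flt):
            if flt is not None:
                df = df[df[flt[0]] > flt[1]]
            grouped = (df.set_index("timestamp")
                         .groupby("user_id")["amount"]
                         .resample(window))
            return getattr(grouped, op)()   # one label series per task
        desc = f"predict the {op} of amount per user over the next {window}"
        if flt is not None:
            desc += f" for events with {flt[0]} > {flt[1]}"
        yield desc, label_fn

# tasks = list(generate_tasks(events_df))   # 24 tasks here; rank for the user
```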

Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers (1901.03006v4)

Daniel Liu, Ronald Yu, Hao Su

2019-01-10

3D object classification and segmentation using deep neural networks have been extremely successful. Because identifying 3D objects has many safety-critical applications, these networks must be robust to adversarial changes to their input. There is a growing body of research on generating human-imperceptible adversarial attacks, and on defenses against them, in the 2D image classification domain. However, 3D objects differ from 2D images in several ways, and this specific domain has not yet been rigorously studied. We present a preliminary evaluation of adversarial attacks on deep 3D point cloud classifiers, namely PointNet and PointNet++, by evaluating both white-box and black-box adversarial attacks originally proposed for 2D images and extending those attacks to reduce the perceptibility of the perturbations in 3D space. We also show that simple defenses are highly effective against those attacks, proposing new defenses that exploit the unique structure of 3D point clouds. Finally, we attempt to explain the effectiveness of the defenses through the intrinsic structures of both the point clouds and the neural network architectures. Overall, we find that networks processing 3D point cloud data are vulnerable to adversarial attacks, but are also more easily defended than 2D image classifiers. Our investigation provides groundwork for future studies on improving the robustness of deep neural networks that handle 3D data.
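
For concreteness, below is a minimal sketch of the two ingredients the paper studies, adapted from the 2D setting: a one-step gradient-sign attack applied to point coordinates, and a simple statistical outlier-removal defense that exploits point cloud structure. The model interface (a classifier taking a batch of (N, 3) clouds) and all hyperparameters are assumptions, not the authors' exact code.

```python
import torch

def fgsm_point_cloud(model, points, label, eps=0.01):
    """One-step gradient-sign attack on an (N, 3) point cloud; assumes a
    PointNet-like classifier that takes a (1, N, 3) batch."""
    points = points.clone().detach().requires_grad_(True)
    logits = model(points.unsqueeze(0))
    loss = torch.nn.functional.cross_entropy(logits, label.unsqueeze(0))
    loss.backward()
    # In 3D the "pixel intensity" change becomes a spatial shift of points.
    return (points + eps * points.grad.sign()).detach()

def outlier_removal_defense(points, k=10, alpha=1.1):
    """Drop points whose mean distance to their k nearest neighbours is
    unusually large, a simple structural defense against outlier points."""
    d = torch.cdist(points, points)                        # (N, N) distances
    knn = d.topk(k + 1, largest=False).values[:, 1:].mean(dim=1)
    return points[knn < alpha * knn.mean()]
```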

Asymptotic Network Independence in Distributed Optimization for Machine Learning (1906.12345v1)

Alex Olshevsky, Ioannis Ch. Paschalidis, Shi Pu

2019-06-28

We provide a discussion of several recent results which have overcome a key barrier in distributed optimization for machine learning. Our focus is the so-called network independence property, which is achieved whenever a distributed method executed over a network of nodes achieves performance comparable to a centralized method with the same computational power as the entire network. We explain this property through an example involving the training of ML models and sketch a short mathematical analysis.
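
To make the setting concrete, here is a minimal sketch of decentralized gradient descent over a ring network: each node mixes its iterate with its neighbours' and then takes a local gradient step. Network independence means that, after a transient, the averaged iterate behaves like centralized gradient descent on the sum of the local losses. The quadratic local losses and the topology below are toy assumptions.

```python
import numpy as np

n, d, T = 8, 5, 500
rng = np.random.default_rng(0)
A = rng.standard_normal((n, d, d)); b = rng.standard_normal((n, d))
# Gradient of the local least-squares loss at node i.
local_grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring: average self + two neighbours.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

X = np.zeros((n, d))                   # one iterate per node
for t in range(T):
    X = W @ X                          # consensus (mixing) step
    step = 0.01 / np.sqrt(t + 1)       # diminishing step size
    X -= step * np.stack([local_grad(i, X[i]) for i in range(n)])

# The nodes' disagreement shrinks; the average iterate tracks the
# centralized method on the sum of local losses.
print("disagreement:", np.linalg.norm(X - X.mean(axis=0)))
```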

Hierarchical Attentional Hybrid Neural Networks for Document Classification (1901.06610v2)

Jader Abreu, Luis Fred, David Macêdo, Cleber Zanchettin

2019-01-20

Document classification is a challenging task with important applications. Deep learning approaches to the problem have gained much attention recently. Despite this progress, the proposed models do not efficiently incorporate knowledge of document structure into the architecture, nor do they account for the contextual importance of words and sentences. In this paper, we propose a new approach based on a combination of convolutional neural networks, gated recurrent units, and attention mechanisms for document classification tasks. The main contribution of this work is the use of convolutional layers to extract more meaningful, generalizable, and abstract features via the hierarchical representation. The proposed method improves on the results of current attention-based approaches for document classification.
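
A minimal sketch of such a hybrid architecture, assuming arbitrary dimensions and not reproducing the authors' released model: convolutions extract n-gram features within each sentence, a bidirectional GRU encodes the sentence sequence, and additive attention pools the sentence representations for classification.

```python
import torch
import torch.nn as nn

class HybridDocClassifier(nn.Module):
    def __init__(self, vocab=10000, emb=100, conv_ch=64, hid=64, classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, conv_ch, kernel_size=3, padding=1)
        self.gru = nn.GRU(conv_ch, hid, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hid, 1)    # additive sentence attention
        self.out = nn.Linear(2 * hid, classes)

    def forward(self, docs):                # docs: (batch, sents, words) ids
        b, s, w = docs.shape
        x = self.emb(docs.view(b * s, w)).transpose(1, 2)  # (b*s, emb, w)
        x = torch.relu(self.conv(x)).max(dim=2).values     # sentence vectors
        h, _ = self.gru(x.view(b, s, -1))                  # (b, s, 2*hid)
        a = torch.softmax(self.att(h).squeeze(-1), dim=1)  # sentence weights
        doc = (a.unsqueeze(-1) * h).sum(dim=1)             # attention pooling
        return self.out(doc)

# logits = HybridDocClassifier()(torch.randint(0, 10000, (2, 7, 30)))
```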

Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty (1906.12340v1)

Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, Dawn Song

2019-06-28

Self-supervision provides effective representations for downstream tasks without requiring labels. However, existing approaches lag behind fully supervised training and are often not thought beneficial beyond obviating the need for annotations. We find that self-supervision can benefit robustness in a variety of ways, including robustness to adversarial examples, label corruption, and common input corruptions. Additionally, self-supervision greatly benefits out-of-distribution detection on difficult, near-distribution outliers, so much so that it exceeds the performance of fully supervised methods. These results demonstrate the promise of self-supervision for improving robustness and uncertainty estimation and establish these tasks as new axes of evaluation for future self-supervised learning research.
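
One common self-supervised task used in this line of work is predicting image rotations; the sketch below shows how such an auxiliary loss can be added alongside standard supervised training. The backbone/head interface and the loss weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def rotation_batch(x):
    """Return x rotated by 0/90/180/270 degrees plus rotation labels.
    Assumes square images so all four rotations share one shape."""
    rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return torch.cat(rots), labels

def total_loss(backbone, cls_head, rot_head, x, y, lam=0.5):
    """Supervised loss plus a weighted auxiliary rotation-prediction loss;
    backbone/heads are hypothetical module interfaces."""
    sup = F.cross_entropy(cls_head(backbone(x)), y)       # standard task
    xr, yr = rotation_batch(x)
    ssl = F.cross_entropy(rot_head(backbone(xr)), yr)     # which rotation?
    return sup + lam * ssl
```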

The Impact of Feature Causality on Normal Behaviour Models for SCADA-based Wind Turbine Fault Detection (1906.12329v1)

Telmo Felgueira, Silvio Rodrigues, Christian S. Perone, Rui Castro

2019-06-28

The cost of wind energy can be reduced by using SCADA data to detect faults in wind turbine components. Normal behaviour models are one of the main fault detection approaches, but there is a lack of consensus on how different input features affect the results. In this work, a new taxonomy based on the causal relations between the input features and the target is presented. Based on this taxonomy, the impact of different input feature configurations on modelling and fault detection performance is evaluated. To this end, a framework that formulates fault detection as a classification problem is also presented.
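
As a toy illustration of the classification framing, the sketch below simulates a causal input feature (wind speed) and a downstream effect feature (component temperature) that responds to a latent fault, then trains an off-the-shelf classifier. The simulated signals and fault model are assumptions made for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
wind = rng.uniform(3, 25, n)                 # causal input feature (m/s)
fault = rng.binomial(1, 0.1, n)              # latent fault indicator
# Component temperature depends on operating conditions and on the fault:
temp = 40 + 1.2 * wind + 8 * fault + rng.normal(0, 2, n)

X = np.column_stack([wind, temp])            # one causal, one effect feature
Xtr, Xte, ytr, yte = train_test_split(X, fault, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("held-out fault-detection accuracy:", clf.score(Xte, yte))
```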

PointFlow: 3D Point Cloud Generation with Continuous Normalizing Flows (1906.12320v1)

Guandao Yang, Xun Huang, Zekun Hao, Ming-Yu Liu, Serge Belongie, Bharath Hariharan

2019-06-28

As 3D point clouds become the representation of choice for multiple vision and graphics applications, the ability to synthesize or reconstruct high-resolution, high-fidelity point clouds becomes crucial. Despite the recent success of deep learning models in discriminative point-cloud tasks, generating point clouds remains challenging. This paper proposes a principled probabilistic framework to generate 3D point clouds by modeling them as a distribution of distributions. Specifically, we learn a two-level hierarchy of distributions where the first level is the distribution of shapes and the second level is the distribution of points given a shape. This formulation allows us to both sample shapes and sample an arbitrary number of points from a shape. Our generative model, named PointFlow, learns each level of the distribution with a continuous normalizing flow. The invertibility of normalizing flows enables the computation of the likelihood during training and allows us to train our model in the variational inference framework. Empirically, we demonstrate that PointFlow achieves state-of-the-art performance in point cloud generation. We additionally show that our model can faithfully reconstruct point clouds and learn useful representations in an unsupervised manner. Code will be available at https://github.com/stevenygd/PointFlow.
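
The two-level sampling interface can be sketched as follows, with simple conditional affine maps standing in for the paper's continuous normalizing flows: first sample a shape latent from a flow over shapes, then sample any number of points from a point-level flow conditioned on that latent. Everything below is a toy stand-in, not the released PointFlow code.

```python
import torch
import torch.nn as nn

class ToyPointFlow(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.shape_flow = nn.Linear(latent_dim, latent_dim)  # stand-in flow
        self.cond = nn.Linear(latent_dim, 2 * 3)             # per-axis scale/shift

    def sample(self, num_points):
        w = torch.randn(1, self.shape_flow.in_features)      # shape prior
        z_shape = self.shape_flow(w)                         # shape latent
        scale, shift = self.cond(z_shape).chunk(2, dim=-1)   # condition points
        y = torch.randn(num_points, 3)                       # point prior
        # Invertible per-point transform (one affine flow step); a CNF would
        # instead integrate a learned ODE, giving exact likelihoods.
        return y * torch.exp(scale) + shift

# cloud = ToyPointFlow().sample(2048)   # any number of points per shape
```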

Statistical Learning from Biased Training Samples (1906.12304v1)

Pierre Laforgue, Stephan Clémençon

2019-06-28

With the deluge of digitized information in the Big Data era, massive datasets are becoming increasingly available for learning predictive models. In many situations, however, poor control of the data acquisition process may jeopardize the outputs of machine-learning algorithms, and selection bias issues are now the subject of much attention in the literature. The purpose of the present article is to investigate how to extend Empirical Risk Minimization (ERM), the main paradigm of statistical learning, when the training observations are generated from biased models, i.e. from distributions different from that of the data in the test/prediction stage. Specifically, following the approach originally proposed in Vardi et al. (1985), we show how to build a "nearly debiased" training statistical population from the biased samples and the related biasing functions, and we study, from a non-asymptotic perspective, the performance of minimizers of an empirical version of the risk computed from the statistical population thus constructed. Remarkably, the learning rate achieved by this procedure is of the same order as that attained in the absence of any selection bias. Beyond these theoretical guarantees, illustrative experimental results supporting the relevance of the proposed algorithmic approach are also presented.
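
A minimal numerical sketch of the debiasing idea, using inverse-probability weighting with a known biasing function as a simple analogue of the Vardi-style construction (the data and bias model are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 50000)
y = 2.0 * x + rng.normal(0, 0.5, 50000)          # E[y] = 0 at test time

omega = lambda v: 1 / (1 + np.exp(-3 * v))       # known selection probability
keep = rng.random(x.size) < omega(x)             # biased training acquisition
xb, yb = x[keep], y[keep]

w = 1.0 / omega(xb)                              # inverse-probability weights
w /= w.sum()

# ERM for a constant predictor under squared loss reduces to a mean;
# the weighted version targets the unbiased test population.
print("naive estimate of E[y]:   ", yb.mean())       # biased upward
print("debiased estimate of E[y]:", np.sum(w * yb))  # close to 0
```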

Luck Matters: Understanding Training Dynamics of Deep ReLU Networks (1905.13405v4)

Yuandong Tian, Tina Jiang, Qucheng Gong, Ari Morcos

2019-05-31

We analyze the training dynamics of deep ReLU networks and their implications for generalization. Using a teacher-student setting, we discover a novel relationship between the gradient received by hidden student nodes and the activations of teacher nodes in deep ReLU networks. With this relationship and the assumption of small overlap between teacher node activations, we prove that (1) student nodes whose weights are initialized close to teacher nodes converge to them at a faster rate, and (2) in the over-parameterized, two-layer case, while a small set of lucky nodes converge to the teacher nodes, the fan-out weights of the other nodes converge to zero. This framework provides insight into several puzzling phenomena in deep learning, such as over-parameterization, implicit regularization, and lottery tickets. We verify our assumption by showing that the majority of BatchNorm biases in pre-trained VGG11/16 models are negative. Experiments on (1) random deep teacher networks with Gaussian inputs, (2) a teacher network pre-trained on CIFAR-10, and (3) extensive ablation studies validate our theoretical predictions.
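
A minimal teacher-student sketch in the spirit of this analysis, with toy sizes: an over-parameterized two-layer ReLU student is fitted to a fixed teacher on Gaussian inputs, and student nodes are then compared with teacher nodes by cosine similarity to see which ones aligned.

```python
import torch

torch.manual_seed(0)
d, m_t, m_s = 20, 5, 50                     # input dim, teacher/student widths
Wt = torch.randn(m_t, d)                    # fixed teacher hidden weights
at = torch.randn(m_t)                       # fixed teacher output weights
teacher = lambda x: torch.relu(x @ Wt.T) @ at

Ws = torch.randn(m_s, d, requires_grad=True)   # over-parameterized student
a_s = torch.randn(m_s, requires_grad=True)
opt = torch.optim.SGD([Ws, a_s], lr=0.01)

for _ in range(2000):
    x = torch.randn(256, d)                 # Gaussian inputs, as in the paper
    loss = ((torch.relu(x @ Ws.T) @ a_s - teacher(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# "Lucky" student nodes are those whose weights align with a teacher node.
cos = torch.nn.functional.normalize(Ws, dim=1) @ \
      torch.nn.functional.normalize(Wt, dim=1).T          # (m_s, m_t)
print("best alignment per teacher node:", cos.max(dim=0).values)
```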

RECURSIA-RRT: Recursive translatable point-set pattern discovery with removal of redundant translators (1906.12286v1)

David Meredith

2019-06-28

Two algorithms, RECURSIA and RRT, are presented, designed to increase the compression factor achieved using SIATEC-based point-set cover algorithms. RECURSIA recursively applies a TEC cover algorithm to the patterns in the TECs that it discovers. RRT attempts to remove translators from each TEC without reducing its covered set. When evaluated with COSIATEC, SIATECCompress and Forth's algorithm on the JKU Patterns Development Database, using RECURSIA with or without RRT increased compression factor and recall but reduced precision. Using RRT alone increased compression factor and reduced recall and precision, but had a smaller effect than RECURSIA.
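
The core idea that SIA-family algorithms build on can be sketched briefly: group pairs of points by their translation vector, so each translator indexes a maximal translatable pattern (MTP). RECURSIA would then recurse into the discovered patterns; the sketch below is illustrative only, with a toy onset/pitch point set.

```python
from collections import defaultdict
from itertools import combinations

def translatable_patterns(points):
    """Map each translation vector to the set of points it translates
    onto other points of the set (the MTP for that translator)."""
    by_translator = defaultdict(set)
    for p, q in combinations(sorted(set(points)), 2):
        t = (q[0] - p[0], q[1] - p[1])
        by_translator[t].add(p)          # p + t is in the set, namely q
    return by_translator

# Toy point set: (onset, pitch) pairs with a repeated three-note figure.
pts = [(0, 60), (1, 62), (2, 64), (4, 60), (5, 62), (6, 64)]
for t, pat in sorted(translatable_patterns(pts).items()):
    if len(pat) >= 3:
        print("translator", t, "pattern", sorted(pat))
```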


