About me




Alexander Vezhnevets


Since February 2015 I have been a research scientist at Google DeepMind. Prior to that, I was a research associate at the University of Edinburgh, where I worked with Vittorio Ferrari in the CALVIN research group. I received my PhD from ETH Zurich under the supervision of Prof. Joachim M. Buhmann.


My Google Scholar profile, with citation statistics, can be found here.

Major peer-reviewed conferences and journals

Associative embeddings for large-scale knowledge transfer with self-assessment
A. Vezhnevets, V. Ferrari
CVPR 2014

Weakly Supervised Structured Output Learning for Semantic Segmentation
A. Vezhnevets, V. Ferrari, J.M. Buhmann
CVPR 2012, Oral

Active Learning for Semantic Segmentation with Expected Change
A. Vezhnevets, J.M. Buhmann, V. Ferrari
CVPR 2012

Anisotropic ssTEM Image Segmentation Using Dense Correspondence Across Sections
D.Laptev, A. Vezhnevets, S. Dwivedi, J.M. Buhmann
MICCAI 2012

Weakly Supervised Semantic Segmentation with a Multi-image Model
A. Vezhnevets, V. Ferrari, J.M. Buhmann
ICCV 2011
[PDF] [Code]

Agnostic Domain Adaptation
A. Vezhnevets, J.M. Buhmann
DAGM 2011, Oral

Towards Weakly Supervised Semantic Segmentation by Means of Multiple Instance and Multitask Learning
A. Vezhnevets, J.M. Buhmann
CVPR 2010

Avoiding Boosting Overfitting by Removing 'Confusing Samples'
A. Vezhnevets, O. Barinova
ECML 2007, Oral

'Modest AdaBoost' - Teaching AdaBoost to Generalize Better
A. Vezhnevets, V. Vezhnevets
Graphicon 2005

Other publications and publications in Russian

V. V. Gavrishchaka, O. V. Barinova, A. P. Vezhnevets, M. A. Monina, "Discovery of multi-component portfolio strategies with continuous tuning to the changing market micro-regimes using input-dependent boosting", Computational Finance and its Applications III, 2008.

A. Sobolev, A. Vezhnevets, V. Vezhnevets, "Probabilistic output for multiclass learning based on error correcting output codes", 13th Conference on Mathematical Methods of Pattern Recognition, 2007.

A. Vezhnevets, A. Sobolev, V. Vezhnevets, "Calibrating one-versus-all multiclass Boosting", 13th Conference on Mathematical Methods of Pattern Recognition, 2007.

O. Barinova, A. Kuzmishkina, A. Vezhnevets, V. Vezhnevets, "Learning class specific edges for vanishing point estimation", Graphicon 2007. [PDF]

A. Vezhnevets, V. Vezhnevets, "Learnable Swendsen-Wang Cuts for Image Segmentation", Graphicon 2007. [PDF]

O. Barinova, A. Vezhnevets, V. Vezhnevets, "Increasing Boosting generalization for classification tasks with overlapping classes", 13th Conference on Mathematical Methods of Pattern Recognition, 2007.

A. Vezhnevets, "Chinese room argument from machine learning perspective", Philosophy of Mathematics: Current Problems, Moscow, Russia, 2007, pp. 377-379.

A. Vezhnevets, "Machine learning methods for the task of visual recognition", Graphicon 2006, Novosibirsk Akademgorodok, Russia, 2006, pp. 166-173.

V. Vezhnevets, R. Shorgin, A. Vezhnevets, "System for mouse control via user's head movement", Graphicon 2006, Novosibirsk Akademgorodok, Russia, 2006, pp. 166-173.


GML AdaBoost Toolbox

This is a very old toolbox of mine that I developed during my undergraduate studies at the MSU Graphics and Media Lab. After discovering that it is still in use (and referred to by Wikipedia), I have fixed a couple of very old bugs, and here it is:

AdaBoost Toolbox v0.4

It implements a couple of popular boosting methods (Real, Gentle, and Modest AdaBoost) with decision tree weak learners. There is also a class for managing cross-validation, and an option to use classifiers trained in MATLAB from C++ (all classes provided). See the manual and examples for details.
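The toolbox itself is written in MATLAB; for readers who just want the gist of how boosting with stumps works, here is a minimal, hypothetical Python sketch of Discrete AdaBoost with decision stump weak learners. It is not the toolbox code (which additionally implements the Real, Gentle, and Modest variants and uses full decision trees); all names and the toy dataset are illustrative.

```python
import numpy as np

def fit_stump(X, y, w):
    """Exhaustively pick the decision stump (feature, threshold, polarity)
    with the lowest weighted 0/1 error."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] >= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, rounds):
    """Discrete AdaBoost with stump weak learners; labels must be in {-1, +1}."""
    w = np.full(len(y), 1.0 / len(y))   # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        j, thr, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-12)                      # guard against a perfect stump
        alpha = 0.5 * np.log((1.0 - err) / err)    # weight of this stump
        pred = pol * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)             # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * p * np.where(X[:, j] >= t, 1, -1) for a, j, t, p in ensemble)
    return np.sign(score)

# Toy 1D problem: the positives form an interval, so no single stump
# separates them, but a small boosted ensemble of stumps does.
X = np.array([[0.1], [0.2], [0.3], [0.4], [0.5], [0.6], [0.7], [0.8], [0.9]])
y = np.where((X[:, 0] >= 0.3) & (X[:, 0] <= 0.7), 1, -1)
ensemble = adaboost(X, y, rounds=3)
```

On this tiny interval dataset, three rounds already classify every training point correctly, which no single stump can do.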

The old project page can be found here.

Frankly, I cannot offer much support to those of you who need help with the toolbox, but you may try contacting me by email.

Multi-Image Model for Semantic Segmentation

This is an implementation of the ICCV 2011 paper "Weakly Supervised Semantic Segmentation with a Multi-image Model". It is quite raw and has certain limitations, described in the README.txt file:

MIM toolbox v0.2

We also provide precalculated features for the MSRC 21 dataset, along with the necessary data structures:

MSRC features

To start, download both the code and the data into the same folder and run MainScript.m. If you wish to apply it to your own data, you will have to reimplement feature extraction and oversegmentation, and format your data accordingly. I plan to release an updated version soon.

Active Alpha Expansion

This is a small toolbox that implements fast alpha expansion with recycling. In other words, if you have to solve many multi-label energy minimization problems that differ only in their unary potentials, this tool lets you do it very quickly. We used this code for our CVPR 2012 paper "Active Learning for Semantic Segmentation with Expected Change". Please cite it if you use the toolbox:

Active Alpha Expansion v0.1
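To give a feel for the recycling idea, here is a toy Python sketch. It is explicitly not the alpha-expansion code from the toolbox: it solves a sequence of Potts-model problems whose unary potentials drift slightly, using plain ICM (Iterated Conditional Modes) as a simple stand-in optimizer, and recycles the previous labeling as a warm start for the next problem. The grid size, weights, and drift are illustrative assumptions.

```python
import numpy as np

def energy(unary, pairwise_w, labels):
    """Potts-model energy: per-pixel unary cost plus a constant penalty for
    each 4-connected neighbour pair with differing labels."""
    H, W, K = unary.shape
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    e += pairwise_w * (labels[1:, :] != labels[:-1, :]).sum()
    e += pairwise_w * (labels[:, 1:] != labels[:, :-1]).sum()
    return float(e)

def icm(unary, pairwise_w, init, max_sweeps=100):
    """Iterated Conditional Modes: greedily relabel each pixel to its locally
    cheapest label; the energy never increases. Returns the labeling and the
    number of sweeps until convergence."""
    H, W, K = unary.shape
    labels = init.copy()
    sweeps = 0
    for sweeps in range(1, max_sweeps + 1):
        changed = 0
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    if 0 <= ni < H and 0 <= nj < W:
                        cost += pairwise_w * (np.arange(K) != labels[ni, nj])
                best = int(cost.argmin())
                if best != labels[i, j]:
                    labels[i, j] = best
                    changed += 1
        if changed == 0:
            break
    return labels, sweeps

rng = np.random.default_rng(0)
H, W, K, lam = 12, 12, 4, 0.5
unary = rng.random((H, W, K))

# Solve a sequence of problems that differ only in (slightly drifting) unaries,
# recycling the previous solution as the warm start for the next problem.
prev = unary.argmin(axis=2)  # cold start for the first problem
for step in range(3):
    unary = unary + 0.05 * rng.random((H, W, K))   # only the unaries change
    cold_init = unary.argmin(axis=2)
    warm, warm_sweeps = icm(unary, lam, prev)      # warm start: recycled labeling
    cold, cold_sweeps = icm(unary, lam, cold_init) # cold start, for comparison
    prev = warm
```

Because consecutive problems are nearly identical, the warm-started solver typically converges in fewer sweeps; the toolbox applies the same principle at the level of graph-cut computations inside alpha expansion.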

Contact information:

University of Edinburgh
10 Crichton Street, G14
Edinburgh, EH8 9AB
Scotland, UK

E-Mail: avezhnev@staffmail.ed.ac.uk