Dawid_Skene

class Dawid_Skene(answers, n_classes, sparse=False, **kwargs)

Dawid and Skene model (1979)

Assumptions: - workers are independent

Using: - EM algorithm

Estimating: - one confusion matrix for each worker

__init__(answers, n_classes, sparse=False, **kwargs)

Dawid and Skene strategy: estimate confusion matrix for each worker.

Assuming that workers are independent, the model posits that

\[(y_i^{(j)}\ | y_i^\star = k) \sim \mathcal{M}\left(\pi^{(j)}_{k,\cdot}\right)\]

and maximizes the log likelihood of the model using an EM algorithm.

\[\begin{split}\underset{\rho,\pi,T}{\mathrm{argmax}}\prod_{i\in [n_{\texttt{task}}]}\prod_{k \in [K]}\bigg[\rho_k\prod_{j\in [n_{\texttt{worker}}]}\prod_{\ell\in [K]}\big(\pi^{(j)}_{k, \ell}\big)^{\mathbf{1}_{\{y_i^{(j)}=\ell\}}}\bigg]^{T_{i,k}},\end{split}\]

where \(\rho\) denotes the class marginals, \(\pi\) the confusion matrices and \(T\) the indicator variables of class membership.

Parameters:
  • answers (dict) –

    Dictionary of workers' answers with format

    {
        task0: {worker0: label, worker1: label},
        task1: {worker1: label}
    }
    

  • sparse (bool, optional) – If the number of workers/tasks/labels is large (\(>10^{6}\) for at least one), use sparse=True to run the estimation task by task

  • n_classes (int, optional) – Number of possible classes, defaults to 2

get_crowd_matrix()

Transform the dictionary of labels into a tensor of shape (n_task, n_workers, n_classes)
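As an illustration (not the library's code), a minimal numpy sketch of this conversion; the answers dict below is a made-up toy example in the documented format:

```python
import numpy as np

# Toy answers dict in the documented format: {task: {worker: label}}
answers = {
    0: {0: 1, 1: 1},   # task 0: workers 0 and 1 both answered class 1
    1: {1: 0},         # task 1: only worker 1 answered, with class 0
}
n_task, n_workers, n_classes = 2, 2, 2

# One-hot tensor of shape (n_task, n_workers, n_classes);
# (task, worker) pairs with no recorded answer stay all-zero.
crowd_matrix = np.zeros((n_task, n_workers, n_classes))
for task, votes in answers.items():
    for worker, label in votes.items():
        crowd_matrix[task, worker, label] = 1
```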

init_T()

NS (NaiveSoft) initialization: \(T\) is initialized with the normalized vote counts of each task
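A plausible sketch of this initialization, assuming NS refers to a normalized vote-count (soft majority vote) strategy:

```python
import numpy as np

# One-hot vote tensor (n_task, n_workers, n_classes), as built by get_crowd_matrix
crowd_matrix = np.array([
    [[0, 1], [0, 1]],   # task 0: both workers answered class 1
    [[0, 0], [1, 0]],   # task 1: only worker 1 answered, with class 0
], dtype=float)

# T[i, k] = fraction of the votes on task i that went to class k
votes = crowd_matrix.sum(axis=1)                 # shape (n_task, n_classes)
T = votes / votes.sum(axis=1, keepdims=True)
```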

m_step()

Maximize the log likelihood (see eq. 2.3 and 2.4 in Dawid and Skene, 1979)

Returns:

\(\rho\): \((\rho_j)_j\), the probability that a task drawn at random has true class \(j\) (class marginals); \(\pi\): the confusion matrices, where \(\pi^{(k)}_{j,\ell}\) estimates the probability that worker \(k\) records \(\ell\) when \(j\) is the correct class
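A numpy sketch of these closed-form updates (not the library's implementation), reusing the toy vote tensor from above with fixed soft labels \(T\):

```python
import numpy as np

# T: current soft labels (n_task, n_classes); crowd_matrix: one-hot votes
T = np.array([[0.0, 1.0], [1.0, 0.0]])
crowd_matrix = np.array([
    [[0, 1], [0, 1]],
    [[0, 0], [1, 0]],
], dtype=float)

# Class marginals rho_k (eq. 2.4): average of T over tasks
rho = T.mean(axis=0)

# Confusion matrices pi[j, k, l] (eq. 2.3): weighted count of worker j
# answering l when the true class is k, normalized over l
counts = np.einsum("ik,ijl->jkl", T, crowd_matrix)
denom = counts.sum(axis=2, keepdims=True)
pi = np.divide(counts, denom, out=np.zeros_like(counts), where=denom > 0)
```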

e_step()

Estimate the indicator variables (see eq. 2.5 in Dawid and Skene, 1979)

Returns:

T: new estimate of the indicator variables, shape (n_task, n_classes); denom: per-task normalization values, reused to compute the log likelihood cheaply
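A numpy sketch of the eq. 2.5 update (again illustrative, not the library's code): each task's posterior over classes is the prior times the product of the confusion-matrix entries of the observed votes; absent votes contribute a factor of 1.

```python
import numpy as np

rho = np.array([0.5, 0.5])          # class marginals from the M-step
pi = np.array([                     # pi[j, k, l], one confusion matrix per worker
    [[0.5, 0.5], [0.0, 1.0]],       # worker 0
    [[1.0, 0.0], [0.0, 1.0]],       # worker 1
])
crowd_matrix = np.array([
    [[0, 1], [0, 1]],
    [[0, 0], [1, 0]],
], dtype=float)

n_task, n_classes = crowd_matrix.shape[0], rho.shape[0]
T = np.zeros((n_task, n_classes))
for i in range(n_task):
    for k in range(n_classes):
        # pi^0 = 1 for missing answers, hence the where(...) fill with 1.0
        likes = np.where(crowd_matrix[i] == 1, pi[:, k, :], 1.0)
        T[i, k] = rho[k] * likes.prod()
denom = T.sum(axis=1, keepdims=True)   # per-task normalizer, used for the likelihood
T = T / denom
```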

log_likelihood()

Compute the log likelihood of the model

run(epsilon=1e-06, maxiter=50, verbose=False)

Run the EM optimization

Parameters:
  • epsilon (float, optional) – stopping criterion (\(\ell_1\) norm between two successive log likelihood iterates), defaults to 1e-6

  • maxiter (int, optional) – Maximum number of steps, defaults to 50

  • verbose (bool, optional) – Verbosity level, defaults to False

Returns:

Log likelihood values and number of steps taken

Return type:

(list,int)
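Putting the pieces together, a self-contained numpy sketch of the whole EM loop, consistent with the steps documented above but not the library's actual implementation (the vote tensor is a made-up toy example):

```python
import numpy as np

def dawid_skene_em(crowd_matrix, epsilon=1e-6, maxiter=50):
    """Minimal EM sketch on a one-hot vote tensor (n_task, n_workers, n_classes)."""
    n_task, n_workers, n_classes = crowd_matrix.shape
    # NS-style init: normalized vote counts per task
    votes = crowd_matrix.sum(axis=1)
    T = votes / votes.sum(axis=1, keepdims=True)
    logl, it = [-np.inf], 0
    while it < maxiter:
        # M-step: class marginals and per-worker confusion matrices
        rho = T.mean(axis=0)
        counts = np.einsum("ik,ijl->jkl", T, crowd_matrix)
        d = counts.sum(axis=2, keepdims=True)
        pi = np.divide(counts, d, out=np.zeros_like(counts), where=d > 0)
        # E-step: posterior over classes for each task
        T = np.zeros((n_task, n_classes))
        for i in range(n_task):
            for k in range(n_classes):
                likes = np.where(crowd_matrix[i] == 1, pi[:, k, :], 1.0)
                T[i, k] = rho[k] * likes.prod()
        denom = T.sum(axis=1, keepdims=True)
        T = T / denom
        logl.append(np.log(denom).sum())
        it += 1
        if abs(logl[-1] - logl[-2]) < epsilon:
            break
    return T, logl[1:], it

crowd_matrix = np.array([
    [[0, 1], [0, 1], [0, 1]],   # task 0: three workers vote class 1
    [[1, 0], [1, 0], [0, 1]],   # task 1: two vote class 0, one votes class 1
], dtype=float)
T, logl, n_steps = dawid_skene_em(crowd_matrix)
yhat = T.argmax(axis=1)   # most probable labels, as returned by get_answers()
```

The returned pair (logl, n_steps) mirrors the documented (list, int) return of run(), and T.argmax / T correspond to get_answers() and get_probas() respectively.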

get_answers()

Get most probable labels

get_probas()

Get soft labels distribution for each task

run_sparse(epsilon=1e-06, maxiter=50, verbose=False)

Run the EM optimization using sparse data structures (used when sparse=True); parameters match those of run()