guix-devel
From: Vagrant Cascadian
Subject: Re: Guidelines for pre-trained ML model weight binaries (Was re: Where should we put machine learning model parameters?)
Date: Tue, 04 Jul 2023 13:03:13 -0700

On 2023-07-04, zamfofex wrote:
>> On 07/03/2023 6:39 AM -03 Simon Tournier <zimon.toutoune@gmail.com> wrote:
>> 
>> Well, I do not see any difference between pre-trained weights and icons
>> or sound or good fitted-parameters (e.g., the package
>> python-scikit-learn has a lot ;-)).  As I said elsewhere, I do not see
>> the difference between pre-trained neural network weights and genomic
>> references (e.g., the package r-bsgenome-hsapiens-1000genomes-hs37d5).
>
> I feel like, although this might (arguably) not be the case for
> leela-zero or Lc0 specifically, for certain machine learning
> projects a pretrained network can affect the program’s behavior so
> deeply that it might be considered a program itself! Such networks
> usually approximate an arbitrary function. The more complex the
> model is, the more complex the behavior of this function can be, and
> thus the closer it is to being an arbitrary program.
>
> But this “program” has no source code, it is effectively created in
> this binary form that is difficult to analyse.
>
> In any case, I feel like the issue Ludovic was talking about “user
> autonomy” is fairly relevant (as I understand it). For icons, images,
> and other similar kinds of assets, it is easy enough for the user to
> replace them, or create their own if they want. But for pretrained
> networks, even if they are under a free license, the user might not be
> able to easily create their own network that suits their purposes.
>
> For example, for image recognition software, the maintainers of the
> program might provide a network able to recognise a specific set of
> objects in input images, but the user might want to use the program
> to recognise a different kind of object. If it is too costly for the
> user to train a new network for their purposes (in terms of the
> hardware and time required), the user is effectively bound by the
> decisions of the maintainers of the software and cannot change it to
> suit their purposes.

For a more concrete example, with facial recognition in particular, many
models are quite good at recognizing the faces of people of
predominantly white European descent, and not very good with people of
other backgrounds, in particular those with darker skin. The models
frequently reflect the blatant and subtle biases of the society in which
they are created, and of the creators who develop them. This can have
disastrous consequences when these models are used without that
understanding... (or even if you do understand the general biases!)

This seems like a significant issue for user freedom: with source code,
you can at least in theory examine the biases of the software you are
using.


live well,
  vagrant
