I received 4 emails with platform statements from candidates for the Associate Professors' representative seat on the University Council of USP. A quick assessment follows.
Bruno Caramelli: the entire message is about the administration of the Hospital Universitário. That is not the only issue, nor by far the most important one for the university. Discarded.
Adrián Pablo Fanjul: rallying cries and ideological slogans. No substance. Discarded.
Marcílio Alves: the candidate shows experience and familiarity with administrative work. The proposal is poorly written - besides errors of punctuation and capitalization, it is drafted in PowerPoint style, bureaucratic and uninformative. Worth considering.
Ana Estela Haddad: a message written in clear, direct Portuguese, presenting a commitment to listen to and represent the professors, without far-fetched promises. She will have my vote.
16 February 2018
02 February 2018
Derivative-free optimization
I uploaded the preprint "The Barycenter Method for Direct Optimization" to arXiv. One thing I didn't mention is Bayesian optimization, a form of direct optimization that tries to learn the shape of the function being optimized using statistical priors.
Also not mentioned is "Grad student descent", a term I have just learned, which seems to be due to @ryan_p_adams, although the method itself is as ancient as walking forwards. It may be better than my own barycenter method if grad student time is cheap enough.
Back to being serious, I am pretty sure that the barycenter method is better than Bayesian optimization, however hard-working and poorly paid the grad students may be. The main difference from my point of view is that Bayesian optimization tries to learn something about the complete function in the process of finding the minimum. That is bound to be more expensive and slow, plus it's sort of scary for a controls person - if you're using your candidate optimum to make decisions in real time, you want to be really careful about exploring alternatives for the sake of knowledge.
The Bayesian optimization literature has ways to deal with such issues, but still - my guess is that in many applications the barycenter method is leaner, and more transparent from the point of view of implementing and understanding what goes on. I would welcome some testing - the necessary extensive simulations are something I don't have the grad student power to do.
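To make the contrast concrete, here is a minimal sketch of a barycenter-style direct search - not the exact algorithm of the preprint; the exponential weighting exp(-nu*f(x)), the Gaussian probing, and all names and parameters are assumptions for illustration. The point is that the method keeps only a weighted running average of query points, rather than building a surrogate model of the whole function.

```python
import math
import random

def barycenter_search(f, x0, steps=300, nu=5.0, sigma=0.3):
    """Sketch of a barycenter-style direct search (illustrative only).

    Each probe x is weighted by exp(-nu * f(x)), so points with low
    cost dominate; the running weighted average (the barycenter) is
    the current estimate of the minimizer, and new probes are drawn
    around it."""
    xbar = x0     # current barycenter estimate
    wsum = 0.0    # accumulated weight
    for _ in range(steps):
        x = xbar + random.gauss(0.0, sigma)   # probe near the barycenter
        w = math.exp(-nu * f(x))              # low cost -> high weight
        wsum += w
        xbar += (w / wsum) * (x - xbar)       # incremental weighted mean
    return xbar

# Toy example: quadratic with minimum at 2.0.
random.seed(0)
est = barycenter_search(lambda x: (x - 2.0) ** 2, x0=0.0)
```

Note that the estimate is a single running average - nothing about the function away from the visited points is retained, which is what makes this kind of scheme cheap compared with fitting a statistical surrogate at every step.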