Reasoning with priorities is the kind of reasoning in which we rely on degrees of importance, urgency, plausibility, specificity, recency, etc. to resolve certain problems concerning our obligations, beliefs, etc. In formal logic and AI, a variety of models have been put forward for these kinds of reasoning. The aim of my project is twofold: (i) to further study the way we reason with priorities, using adaptive logics as my frame of reference, and (ii) to develop formal models of the way people obtain priorities. The second topic in particular has received fairly little attention so far.