We study the evolution of conventions in a ``language'' game, in which two groups of
agents assign different positive payoffs to coordinating on different
actions and a payoff of zero to miscoordination, under the assumption that agents can pay a cost to learn the type (group) of their opponent. Agents who pay the cost can condition their action on the opponent's type. We analyze two cases: in the first, the cost is equal to zero and, in the second, it is strictly positive. We identify long-run conventions using stochastic stability analysis. When the cost is zero or sufficiently low, agents always coordinate on their preferred action with agents of their own type, while their behavior in mixed interactions depends on the two types' preferences: in such interactions, the whole population plays the action preferred by the type whose preferences are more rigid. When the cost is sufficiently high, two scenarios can arise. If one type is sufficiently more rigid in its preferences than the other, every agent plays the action preferred by that type against any opponent. If both types are rigid in their preferences, the long-run equilibrium is a heterogeneous strategy profile that causes miscoordination: each type plays its preferred action against everyone.
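
For concreteness, one possible way to write the stage-game payoffs is the following minimal sketch; the symbols $a_i$, $b_i$ and the ratio-based reading of rigidity are illustrative assumptions, not notation fixed above. When a type-$i$ agent meets a type-$j$ agent, payoffs could take the form
\[
\begin{array}{c|cc}
 & A & B \\ \hline
A & a_i,\; a_j & 0,\; 0 \\
B & 0,\; 0 & b_i,\; b_j
\end{array}
\qquad a_1 > b_1 > 0, \qquad b_2 > a_2 > 0,
\]
so that type $1$ prefers coordinating on $A$, type $2$ prefers coordinating on $B$, and any miscoordination yields zero. Under this parametrization, one natural reading of a type's rigidity is the ratio of its preferred to its non-preferred coordination payoff (e.g., $a_1/b_1$ for type $1$): the larger the ratio, the more that type loses by conceding to the other type's convention.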