I do also wonder whether more focus on the looking-for-a-counterexample side of things would be sensible. However, it seems very hard to even find a promising direction on that.

But let’s get the ball rolling: here is an attempted counterexample to FUNC.

Let the ground set be , all of equal size. For each , let be a subset of , taken randomly of size . We generate our union-closed family with all sets that contain some , and avoid the corresponding .

These generators are constrained on elements of . Any union of two or more of these generators is constrained on about elements of . So, picking small and then large, almost all the sets in our family are generators. Also, the generators have average size less than , because they exclude slightly more elements than they include.

So, the average size of our sets is small, and the scheme doesn’t favour any particular element of . That’s promising. The problem is that the don’t cover anything like twice – so some (in fact most) elements of are only in a single . It looks very much like those elements are going to be the abundant ones.
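The parameters of the construction above did not survive extraction, so here is a self-contained toy version with made-up sizes (the sets playing the roles of the "contain" and "avoid" constraints are called `B` and `C` below, and the block structure of the ground set is not modeled): build the generators, close under union, and compute each element's abundance.

```python
import itertools
import random

# Made-up small parameters, chosen only so the closure stays tiny.
random.seed(0)
n = 12                       # ground set {0, ..., n-1}
ground = set(range(n))
k = 3                        # number of (B, C) constraint pairs
pairs = []
for _ in range(k):
    B = set(random.sample(sorted(ground), 2))          # sets must contain B
    C = set(random.sample(sorted(ground - B), 3))      # and avoid C
    pairs.append((frozenset(B), frozenset(C)))

# Generators: every subset that contains some B_i and is disjoint from C_i.
gens = set()
for B, C in pairs:
    rest = sorted(ground - B - C)
    for r in range(len(rest) + 1):
        for extra in itertools.combinations(rest, r):
            gens.add(frozenset(B | set(extra)))

# Close under union to get the union-closed family.
family = set(gens)
frontier = set(gens)
while frontier:
    new = {a | b for a in frontier for b in family} - family
    family |= new
    frontier = new

# Abundance of each element: fraction of member sets containing it.
abundance = {x: sum(x in S for S in family) / len(family) for x in ground}
print(len(gens), len(family), max(abundance.values()))
```

With parameters like these the closure is small enough to inspect directly, so one can see how large a fraction of the family the generators make up and which elements end up abundant.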

For another potential improvement, we could also try to choose the minimal transversal in some clever way, but I don’t have any ideas along these lines.

As in Knill’s proof, consider a minimal transversal . As explained in the previous comment, there is an associated weight function , and we want to find such that the total weight of the -containing sets is a high fraction of the overall weight . Knill’s original argument is like this: if every had a total weight of less than , then it wouldn’t be possible to cover all of , which has a total weight of , where the negative term is . Since , it follows that there is some of total weight at least .
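The displayed quantities in this argument did not survive extraction, so here is the covering step written out with notation chosen for this sketch (not necessarily the original symbols): let $\mathcal{F}$ be the family, $w$ the weight function, $W = \sum_{A \in \mathcal{F}} w(A)$ the overall weight, and $S$ the minimal transversal. Every nonempty member of $\mathcal{F}$ meets $S$, so

$$\sum_{x \in S} \; \sum_{A \in \mathcal{F},\; x \in A} w(A) \;\ge\; \sum_{\emptyset \neq A \in \mathcal{F}} w(A) \;=\; W - w(\emptyset),$$

and hence by pigeonhole some $x \in S$ has total weight at least $(W - w(\emptyset))/|S|$; the $-w(\emptyset)$ term is the "negative term" mentioned in the prose.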

This argument is suboptimal: the worst-case scenario that it assumes is that the weight is completely concentrated on the singletons. This doesn’t happen in the case at hand, since every nonempty has a weight . This is good because every that is not a singleton will contribute its weight to more than one element of .

So how can we improve the argument? What I’ve tried so far is just to use that everywhere. Using a worst-case analysis analogous to the above, I’ve found that this implies that some must have a total weight of at least

where is somewhere between and . It’s encouraging that at these two extreme values, the resulting lower bound is precisely . However, my numerics indicate that for fixed , there is a unique minimum of this bound as a function of , and the value of this minimum seems asymptotically at most a constant factor away from Knill’s bound . So this is not a really big improvement yet.

Is there more that we can say about weight functions arising from quotienting along for minimal transversals $S$?

The idea is this. Let be a set system and let be its ground set. Define a random walk on as follows. If you are at , then pick a random set that contains and move to a random element of . Then the distribution associated with is the stationary distribution of this random walk.

Let be a point in , and let us duplicate it.

OK I’m completely wrong, and it’s obvious that duplication has a massive effect. (For example, if you duplicate a billion times, then you are hugely more likely to end up at a copy of whenever you choose some set that contains in the original system.) In the Polymath spirit I will post this non-thought anyway, in case the idea can be rescued somehow.
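The duplication effect can be checked numerically. The sketch below uses a made-up four-set family (not one from the discussion): it builds the walk defined above (from an element, pick a uniformly random member set containing it, then a uniformly random element of that set), computes the stationary distribution by power iteration, then duplicates one element (adding a copy to every set containing the original) and recomputes.

```python
import numpy as np

def stationary(ground, family):
    """Stationary distribution of the set-system random walk."""
    n = len(ground)
    idx = {x: i for i, x in enumerate(ground)}
    P = np.zeros((n, n))
    for x in ground:
        containing = [S for S in family if x in S]
        for S in containing:
            for y in S:
                # uniform choice of a containing set, then of an element
                P[idx[x], idx[y]] += 1.0 / (len(containing) * len(S))
    pi = np.full(n, 1.0 / n)
    for _ in range(5000):     # power iteration
        pi = pi @ P
    return dict(zip(ground, pi))

# A toy union-closed family on {0, 1, 2} (invented for illustration).
ground = [0, 1, 2]
family = [frozenset(s) for s in [{0}, {0, 1}, {1, 2}, {0, 1, 2}]]

# Duplicate element 1 as a new element 3: add 3 to every set containing 1.
dup_family = [S | {3} if 1 in S else S for S in family]

print(stationary(ground, family))
print(stationary(ground + [3], dup_family))
```

Comparing the two printed distributions on a few examples like this makes the effect of duplication concrete: the transition probabilities out of every set containing the duplicated element change, so the stationary measure is not preserved.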

I don’t know what I think about this. It also makes me think of something Gil Kalai once wrote, possibly in a blog comment (I’d be very pleased to have an exact reference), which was that there is no difference between looking for a proof and looking for a counterexample. I agree strongly with the point he is making: whether or not you believe a statement, in order to find out whether it is true you basically need (or very often need) to go through the same iterative process of trying to prove it, getting stuck, trying to turn that stuckness into a counterexample, getting stuck, trying to turn that stuckness into a proof, and so on until, if you are lucky, the process converges.

In another sense there obviously *is* a difference, in that there are two distinct parts to that iterative process — the looking-for-proof parts and the looking-for-counterexample parts. So the question remains of whether we should try to come up with proposals for methods of creating counterexamples.

One vague thought I have had, and I don’t think I’m alone in this, is that it might be possible to do something like creating an example where only a few elements have abundance greater than 1/2 (which we know we can do), and then somehow creating out of that example a new example where we reduce the abundances of those few elements, probably adding some new elements in the process. The difficulty with such approaches seems to be that in reducing the abundance of some elements, the constructions one can think of seem to introduce new elements with large abundances.

In the first formulation, the family would be:

,

where is the Renaud-Sarvate family, and is the special 3-set in that family with no abundant element.

Anyway, I’m going to rephrase my example as a conjecture using your language to remove the rather unnecessary reference to , and also add a weaker version I still can’t find a counterexample to.

Conjecture 1: If and are union-closed, with and , then there is an element which is abundant in both and .

Conjecture 2: Under the same conditions, every abundant element of is abundant in .
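The side conditions in Conjecture 1 did not survive extraction, so the sketch below only enumerates the unconstrained version on a 3-element ground set: it lists every union-closed family containing a non-empty set, and counts ordered pairs of such families that share an element abundant (in at least half the member sets) in both. It is a scaffold to slot the missing conditions into, not a test of the conjecture as actually stated.

```python
import itertools

ground = (0, 1, 2)
subsets = [frozenset(s) for r in range(4)
           for s in itertools.combinations(ground, r)]

def is_union_closed(fam):
    return all(a | b in fam for a in fam for b in fam)

def abundant_elements(fam):
    # elements appearing in at least half of the member sets
    return {x for x in ground if 2 * sum(x in S for S in fam) >= len(fam)}

# All union-closed families that contain at least one non-empty set.
ucs = [fam for k in range(1, len(subsets) + 1)
       for fam in map(frozenset, itertools.combinations(subsets, k))
       if is_union_closed(fam) and any(S for S in fam)]

abund = {fam: abundant_elements(fam) for fam in ucs}
shared = sum(1 for A in ucs for B in ucs if abund[A] & abund[B])
print(len(ucs), shared, len(ucs) ** 2)
```

Even without the lost conditions, the counts show that the bare statement fails for some pairs, which is consistent with the later remark in the thread that condition 1 is there for a reason; adding the intended hypotheses as an extra filter on the pair `(A, B)` would turn this into a direct search for counterexamples to the conjectures.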

Suppose we do as you say and when we duplicate to become and we split up all the implications in the natural way and also add the implications and . Then this changes the stationary distribution, because effectively it is adding in a non-zero probability of staying put at in the random walk.

The reason I thought of this is the following example. Let be the ground set, where is a copy of . Then let consist of all sets where , is the copy of in and for some (fixed in advance) bijection . This is the previous example but with elements of duplicated. Also add in the empty set.

But now the Horn clauses will be , , , , and . So for the random walk, points in are equally likely to go to or , points in are equally likely to go to or , and points in are equally likely to go to or .

The number of sets is still approximately , since all we’ve done is duplicate . So the abundances in and are about 1/3 and the abundances in are about 2/3, which means that the average abundance for the stationary distribution is about .

This example feels like a slight cheat, but I think it may be possible to modify it to produce one that doesn’t rely on duplication to the same extent.

Say that if . That is, if and , then . Then if and both set systems are union closed, there exists an element that has abundance at least 1/2 in both systems.

The connection (just to spell it out) is that if is any union-closed system and belongs to its ground set, then .

The fact that you ask for condition 1 suggests that you may already know that this is false.

1) is not abundant in

2) No element is abundant in both and .

3) Both and contain a non-empty set (just to exclude some silly examples).

Moore families” by Labernia and Raynaud might be relevant / interesting. I haven’t had the time to read it in detail yet.