In the last post I concentrated on examples, so in this one I’ll concentrate on conjectures related to FUNC, though I may say a little about examples at the end, since a discussion has recently started about how we might go about trying to find a counterexample to FUNC.

### A proposal for a rather complicated averaging argument

After the failure of the average-overlap-density conjecture, I came up with a more refined conjecture along similar lines that has one or two nice properties and has not yet been shown to be false.

The basic aim is the same: to take a union-closed family $\mathcal{A}$ and use it to construct a probability measure on the ground set in such a way that the average abundance with respect to that measure is at least 1/2. With the failed conjecture the method was very basic: pick a random non-empty set $A \in \mathcal{A}$ and then a random element $x \in A$.

The trouble with picking random elements is that it gives rise to a distribution that does not behave well when you duplicate elements. (What you would want is that the probability is shared out amongst the duplicates, but in actual fact if you duplicate an element lots of times it gives an advantage to the set of duplicates that the original element did not have.) This is not just an aesthetic concern: it was at the heart of the downfall of the conjecture. What one really wants, and this is a point that Tobias Fritz has been emphasizing, is to avoid talking about the ground set altogether, something one can do by formulating the conjecture in terms of lattices, though I'm not sure whether what I'm about to describe makes sense for lattices.

Let $\mathcal{A}$ be a union-closed set system with ground set $X$. Define a *chain* to be a collection $B_1 \supset B_2 \supset \dots \supset B_k$ of subsets of $X$ with the following properties.

- The inclusions are strict.
- Each $B_i$ is an intersection of sets in $\mathcal{A}$.
- $B_k$ is non-empty, but for every $A \in \mathcal{A}$, either $B_k \subseteq A$ or $B_k \cap A = \emptyset$.

The idea is to choose a random chain and then a random element of $B_k$. That last step is harmless because the elements of $B_k$ are indistinguishable from the point of view of $\mathcal{A}$ (they are all contained in the same sets). So this construction behaves itself when you duplicate elements.

What exactly is a random chain? What I suggested before was to run an algorithm like this. You start with $B_1 = X$. Having got to $B_i$, let $\mathcal{A}_i$ consist of all sets $A \in \mathcal{A}$ such that $B_i \cap A$ is neither empty nor $B_i$, pick a random set $A \in \mathcal{A}_i$, and let $B_{i+1} = B_i \cap A$. But that is not the only possibility. Another would be to define a chain to be *maximal* if for every $i$ there is no intersection $B$ of sets in $\mathcal{A}$ such that $B$ lies strictly between $B_{i+1}$ and $B_i$, and then to pick a maximal chain uniformly at random.
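To make the first of these suggestions concrete, here is a small computational sketch (the code and the toy family are mine, not from the discussion): it computes the exact distribution on the ground set induced by the algorithm (start at the whole ground set, repeatedly intersect with a uniformly random properly splitting set, and finally pick a uniform element of the terminal block), and then the average abundance with respect to that distribution.

```python
from fractions import Fraction

def chain_distribution(family, ground):
    """Exact distribution on the ground set induced by the random-chain
    process: start with B = X; while some set in the family splits B
    properly, intersect B with such a set chosen uniformly at random;
    finally pick a uniform element of the terminal block."""
    weights = {x: Fraction(0) for x in ground}

    def recurse(B, p):
        splitters = [A for A in family if B & A and B & A != B]
        if not splitters:
            for x in B:  # terminal block: spread the mass uniformly
                weights[x] += p / len(B)
            return
        for A in splitters:
            recurse(B & A, p / len(splitters))

    recurse(frozenset(ground), Fraction(1))
    return weights

def average_abundance(family, weights):
    """Average, under `weights`, of the proportion of sets containing x."""
    n = len(family)
    return sum(w * Fraction(sum(1 for A in family if x in A), n)
               for x, w in weights.items())
```

On the toy family $\{\{0\},\{1\},\{0,1\}\}$ this gives each point mass 1/2 and average abundance 2/3; whether some family pushes the average below 1/2 is exactly the open question.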

At the moment I think that the first idea is more natural and therefore more likely to work. (But “more likely” does not imply “likely”.) The fact that it seems hard to disprove is not a good reason for optimism, since the definition is sufficiently complicated that it is hard to analyse. Perhaps there is a simple example for which the conjecture fails by miles, but for which it is very hard to prove that it fails by miles (other than by checking it on a computer if the example is small enough).

Another possible idea is this. Start a random walk at $X$. The walk takes place on the set of subsets of $X$ that are non-empty intersections of sets in $\mathcal{A}$. Call this set system $\mathcal{B}$. Then join $B_1$ to $B_2$ in $\mathcal{B}$ if $B_2$ is a proper subset of $B_1$ and there is no $B \in \mathcal{B}$ that lies properly between $B_2$ and $B_1$. To be clear, I'm defining an *un*directed graph here, so if $B_1$ is joined to $B_2$, then $B_2$ is joined to $B_1$.

Now we do a random walk on this graph by picking a random neighbour at each stage, and we take its stationary distribution. One could then condition this distribution on the set you are at being a minimal element of $\mathcal{B}$. This gives a distribution on the minimal elements, and then the claim would be that on average a minimal element is contained in at least half the sets in $\mathcal{A}$.
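Here, similarly, is a sketch of the random-walk proposal for a small family (my code; I use the standard fact that the stationary distribution of a simple random walk on a connected undirected graph is proportional to degree, and I include $X$ itself in the set system, as the empty intersection):

```python
from fractions import Fraction

def intersection_closure(family, ground):
    """X together with all non-empty intersections of sets in the family."""
    closed = {frozenset(ground)}
    frontier = set(closed)
    while frontier:
        new = {B & A for B in frontier for A in family} - closed
        new.discard(frozenset())  # keep only non-empty intersections
        closed |= new
        frontier = new
    return closed

def cover_graph(closed):
    """Undirected covering edges: B2 joined to B1 when B2 is a proper
    subset of B1 with no member of `closed` strictly between them."""
    return {frozenset({B1, B2})
            for B1 in closed for B2 in closed
            if B2 < B1 and not any(B2 < C < B1 for C in closed)}

def minimal_distribution(closed, edges):
    """Stationary distribution of the simple random walk (proportional
    to degree), conditioned on being at a minimal element of `closed`."""
    deg = {B: sum(1 for e in edges if B in e) for B in closed}
    minimal = [B for B in closed if not any(C < B for C in closed)]
    total = sum(deg[B] for B in minimal)
    return {B: Fraction(deg[B], total) for B in minimal}
```

For the toy family $\{\{0\},\{1\},\{0,1\}\}$ the conditioned distribution gives mass 1/2 to each of the minimal sets $\{0\}$ and $\{1\}$, each of which is contained in 2 of the 3 sets.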

I’ll finish this section with the obvious question.

**Question.** *Does an averaging argument with a probability distribution like one of these have the slightest chance of working? If so, how would one go about proving it?*

### Describing union-closed families using Horn clauses

Tobias Fritz has shared with us a very nice observation that gives another way of looking at union-closed families. It is sufficiently natural that I feel there is a good chance that it will be genuinely helpful, and not just a slightly different perspective on all the same statements.

Let $X$ be a finite set, let $x \in X$ and let $K$ be a non-empty subset of $X$. Write $x \to K$ as shorthand for the condition

if $x \in A$, then $A \cap K \neq \emptyset$.

If $K = \{y_1, \dots, y_k\}$, then we can write this as a *Horn clause*

$x \implies y_1 \vee y_2 \vee \dots \vee y_k$.

If $\mathcal{C}$ is a collection of conditions of this kind, then we can define a set system $\mathcal{A}$ to consist of all sets $A$ that satisfy all of them. That is, for each condition $x \to K$ in $\mathcal{C}$, if $x \in A$, then $A \cap K \neq \emptyset$.

It is very easy to check that any set system defined this way is union closed and contains the empty set. Conversely, given a union-closed family $\mathcal{A}$ that includes the empty set, let $A$ be a subset of $X$ that does not belong to $\mathcal{A}$. If for every $x \in A$ we can find $A_x \in \mathcal{A}$ such that $x \in A_x \subseteq A$, then we have a contradiction, since the union of these $A_x$ belongs to $\mathcal{A}$ and is equal to $A$. So there must be some $x \in A$ such that for every $B \in \mathcal{A}$, if $x \in B$, then $B \cap (X \setminus A) \neq \emptyset$. That is, there is a condition $x \to X \setminus A$ that is satisfied by every $B \in \mathcal{A}$ and is not satisfied by $A$. Taking all such conditions, we have a collection of conditions that gives rise to precisely the set system $\mathcal{A}$.
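As an illustration of this correspondence (the condition list here is a hypothetical example of mine), one can generate the set system defined by a list of conditions, each given by a point $x$ and a non-empty set $K$ and read as "any member containing $x$ must meet $K$", and confirm that the result is union closed and contains the empty set:

```python
from itertools import chain, combinations

def family_from_conditions(ground, conditions):
    """All subsets A of `ground` such that for every condition (x, K):
    if x is in A, then A meets K."""
    subsets = chain.from_iterable(combinations(sorted(ground), r)
                                  for r in range(len(ground) + 1))
    return [frozenset(A) for A in subsets
            if all(x not in A or set(K) & set(A) for x, K in conditions)]
```

With ground set $\{0,1,2\}$ and the conditions $0 \to \{1,2\}$ and $2 \to \{0\}$, this produces the five sets $\emptyset, \{1\}, \{0,1\}, \{0,2\}, \{0,1,2\}$, and one can check that the union of any two of them is again one of them.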

As Thomas says, this is strongly reminiscent of describing a convex body not as a set of points but as an intersection of half spaces. Since that dual approach is often extremely useful, it seems very much worth bearing in mind when thinking about FUNC. At the very least, it gives us a concise way of describing some union-closed families that would be complicated to define in a more element-listing way: Tobias used it to describe one of Thomas Bloom’s examples quite concisely, for instance.

#### Generalizing the idea

Suppose we have a Horn-clause description of a union-closed family $\mathcal{A}$. For each $x \in X$, it gives us a collection of conditions that a set $A$ containing $x$ must satisfy, each of the form $A \cap K \neq \emptyset$. Putting all these together gives us a single condition in conjunctive normal form. This single condition is a monotone property of $A$, and any monotone property can arise in this way. So if we want, we can forget about Horn clauses and instead think of an arbitrary union-closed family as being defined as follows. For each $x \in X$, there is some monotone property $P_x$, and then $\mathcal{A}$ consists of all sets $A$ such that for every $x \in A$, the property $P_x(A)$ holds.

To illustrate this with an example (not one that has any chance of being a counterexample to FUNC — just an example of the kind of thing one can do), we could take $X = \mathbb{Z}_p$ (the integers mod a prime $p$) and take $P_x$ to be the property “contains a subset of the form $\{a, a+x, a+2x, \dots, a+9x\}$”. Note that this is a very concise definition, but the resulting criterion for a set to belong to $\mathcal{A}$ is not simple at all. (If you think it is, then can you exhibit for me a non-empty set of density less than 1/2 that satisfies the condition when $p$ is large, or prove that no such set exists? *Update: I’ve now realized that this question has a fairly easy answer — given in a comment below. But describing the sets that satisfy the condition would not be simple.*)

#### Natural questions that arise

This way of looking at union-closed families also generates many special cases of FUNC that could be interesting to tackle. For example, we can take the ground set to be some structure (above, I took a cyclic group, but one could also take, for instance, the complete graph on a set of vertices) and restrict attention to properties that are natural within that structure (where “natural” could mean something like invariant under symmetries of the structure that fix $x$).

Another special case that is very natural to think about is where each property $P_x$ is a single disjunction — that is, the Horn-clause formulation in the special case where each $x \in X$ is on the left of exactly one Horn clause. Is FUNC true in this case? Or might this case be a good place to search for a counterexample? At the time of writing, I have no intuition at all about this question, so even heuristic arguments would be interesting.

### A question of Gil Kalai

As discussed in the last post, we already know that an optimistic conjecture of Tobias Fritz, that there is always some $x$ and a union-preserving injection from $\mathcal{A}_{\overline{x}}$ to $\mathcal{A}_x$, is false. (Here $\mathcal{A}_x$ denotes the sets in $\mathcal{A}$ that contain $x$, and $\mathcal{A}_{\overline{x}}$ those that do not.) Gil Kalai proposed a conjecture in a similar spirit: that there is always an injection $\phi$ from $\mathcal{A}_{\overline{x}}$ to $\mathcal{A}_x$ such that each set in $\mathcal{A}_{\overline{x}}$ is a subset of its image. So far, nobody (or at least nobody here) has disproved this. I tried to check whether the counterexamples to Tobias’s conjecture worked here too, and I’m fairly sure the complement-of-Steiner-system approach doesn’t work.

While the general belief seems to be (at least if we believe Jeff Kahn) that such strengthenings are false, it would be very good to confirm this. Of course it would be even better to prove the strengthening …

*Update: Alec Edgington has now found a counterexample.*

### A question of Tom Eccles

In this comment Tom Eccles asked a question motivated by thinking about what an inductive proof of FUNC could possibly look like. The question ought to be simpler than FUNC, and asks the following. Does there exist a union-closed family $\mathcal{A}$ and an element $x$ with the following three properties?

- $x$ has abundance less than 1/2.
- No element has abundance greater than or equal to 1/2 in both $\mathcal{A}_x$ and $\mathcal{A}_{\overline{x}}$.
- Both $\mathcal{A}_x$ and $\mathcal{A}_{\overline{x}}$ contain at least one non-empty set.

It would be very nice to have such an example, because it would make an excellent test case for proposed inductive approaches.

There’s probably plenty more I could extract from the comment thread in the last post, but I think it’s time to post this, since the number of comments has exceeded 100.

While I’m saying that, let me add a general remark that if anyone thinks that a direction of discussion is being wrongly neglected, then please feel free to highlight it, even (or perhaps especially) if it is a direction that you yourself introduced. These posts are based on what happens to have caught my attention, but should not be interpreted as a careful judgment of what is interesting and what is not. I hope that everything I include is interesting, but the converse is completely false.

February 13, 2016 at 4:57 pm |

There’s a question that’s been bugging me for some time, which I haven’t asked because I haven’t managed to make it even slightly precise. But I think maybe now I can (make it slightly precise, that is, not fully so).

Given a union-closed set system $\mathcal{A}$, I would very much like to construct a probability distribution on its ground set in a way that respects duplication. One way of expressing this requirement is as follows. Suppose $\mathcal{A}$ is a finite collection of subsets of a *not necessarily finite* set $X$. Is there a way of constructing a probability measure on the algebra generated by $\mathcal{A}$? The advantage of phrasing the question this way is that if all we know about $X$ is that it is some set, then there isn’t some pre-existing measure on $X$ to distract us. Of course, we can build an isomorphic set system by taking the atoms of this algebra as its ground set and replacing each set in $\mathcal{A}$ by the set of atoms it contains. But it is far from clear that it is natural to put the uniform distribution on the atoms.

Here’s an example of a set system that separates the points of the ground set, where the uniform measure on the ground set is clearly a silly one to take.

Let $X = A \cup B$, where $A$ and $B$ are disjoint, $|A| = m$ and $|B| = n$, and $m$ is much smaller than $n$, which in turn is much smaller than $2^m$. Let $\mathcal{A}$ consist of all subsets of $A$ together with all sets of the form $X \setminus \{b\}$ with $b \in B$, and of course the set $X$ itself.

Each $a \in A$ belongs to $2^{m-1} + n + 1$ of these $2^m + n + 1$ sets, so has abundance slightly greater than 1/2. Each $b \in B$ belongs to just $n$ of the sets, so has abundance close to zero. Therefore, the average abundance is almost zero, but the set system separates points.

It feels to me as though what has gone wrong here is that in some sense the set system separates the points in $A$ far more than it separates the points in $B$. So we need to replace the qualitative notion of being separated by a much more quantitative one. For example, for each pair $x, y$ we could ask not just whether there *exists* a set in $\mathcal{A}$ that contains $x$ and not $y$, but *what proportion* of the sets in $\mathcal{A}$ have that property. With the set system above, we find that if both $x$ and $y$ belong to $A$, then this proportion is approximately 1/4, whereas if $x$ belongs to $A$ and $y$ belongs to $B$ then it is approximately 1/2, and if $x$ belongs to $B$ then it is close to zero. This makes it very tempting to try to associate these probabilities with transitions in a random walk that would in this case have a stationary distribution strongly concentrated in $A$. But I don’t immediately see a good definition that does this. (Recall that a good definition would need to work even if we duplicated all the points in $A$ and $B$ infinitely many times.)

February 13, 2016 at 5:14 pm |

A different vague thought I wanted to note down concerns the search for a good weighted version of FUNC. As Ben Barber has pointed out, it is hard to come up with a satisfactory weighted version, and the attempts mentioned in FUNC1 are unsatisfactory.

One way of summarizing the difficulty is as follows. To turn a set system into a function, the obvious method is to take its characteristic function, which is a function from the power set of $X$ to $\{0,1\}$. A weighted version is then most naturally thought of as a function from the power set of $X$ to a more general set such as $[0,1]$ or $\mathbb{R}_{\geq 0}$. But then even saying what a union is is difficult. It looks as though we want a binary operation $\circ$ on values such that $0 \circ 0 = 0$ and $0 \circ 1 = 1 \circ 0 = 1 \circ 1 = 1$, which we then apply pointwise.

A natural candidate for this is the pointwise maximum, but then an immediate problem arises. We would expect all the things we are counting to depend continuously on the function. So how do we count abundances? To see the problem, consider the set system $\{\{1\},\{1,2\}\}$, or in function terms $(1,0)$ and $(1,1)$. The number of distinct pointwise maxima is 2, so 1 has abundance 1 and 2 has abundance 1/2. Now replace this by the functions $(1,0)$ and $(1-\epsilon,1)$. Now there are three distinct pointwise maxima, and the abundances suddenly change. But what we clearly should be doing is somehow reflecting the fact that $(1-\epsilon,1)$ is very close to $(1,1)$.

The answer, it seems, should be to replace counting of sets by something more to do with metric entropy — that is, the sizes of various nets of the set of all pointwise maxima. I haven’t thought about what one might do in any detail, so I’ll end this comment here.

February 14, 2016 at 8:55 am

Much of the above comment is incoherent because I was muddling up generalizing 0-1 functions on the power set with generalizing 0-1 functions on the ground set. I’ll try to post an improved version of the comment later.

February 13, 2016 at 8:10 pm |

I found a symmetric 17-set family over a 5-element ground set that answers Gil’s proposed conjecture in the negative: that is, it admits no injection $\phi\colon \mathcal{A}_{\overline{x}} \to \mathcal{A}_x$ such that $B \subseteq \phi(B)$ for all $B \in \mathcal{A}_{\overline{x}}$.

Let the ground set be $\mathbb{Z}_5$. The family consists of all sets of the form $\bigcup_{x \in S} \{x, x+1\}$ where $S \subseteq \mathbb{Z}_5$. (I interpret $S = \emptyset$ as defining the empty set.)

It’s easy to see that this is union-closed. To verify that it’s a counterexample, by symmetry we need only consider one value of $x$, say $x = 0$. Suppose we have an injection $\phi$ of the required form. Then the following facts are forced: (1) $\phi(\{1,2,3,4\}) = \{0,1,2,3,4\}$; (2) $\phi(\{1,2,3\}) = \{0,1,2,3\}$; (3) $\phi(\{2,3,4\}) = \{0,2,3,4\}$. Finally, there is nowhere for $\{2,3\}$ to go.
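This can all be confirmed by machine. The sketch below (my code) rebuilds the family as unions of cyclic intervals $\{x, x+1\}$ in $\mathbb{Z}_5$, checks that it has 17 sets and is union closed, and verifies a Hall-type certificate: four sets avoiding 0 whose only available images (supersets in the family containing 0) number just three, which is a restatement of the forced facts above.

```python
from itertools import chain, combinations

def alec_family(n=5):
    """All unions of cyclic intervals {x, x+1} in Z_n (the empty union
    giving the empty set)."""
    subsets = chain.from_iterable(combinations(range(n), r)
                                  for r in range(n + 1))
    return {frozenset((x + d) % n for x in S for d in (0, 1))
            for S in subsets}

def hall_violation(fam, x, witnesses):
    """True if the witness sets all avoid x but the sets in the family
    that contain x and contain some witness as a subset are fewer in
    number than the witnesses (so no injection B -> phi(B) with
    B a subset of phi(B) can exist)."""
    assert all(B in fam and x not in B for B in witnesses)
    targets = {A for A in fam if x in A and any(B <= A for B in witnesses)}
    return len(targets) < len(witnesses)
```

Here the witnesses are $\{2,3\}, \{1,2,3\}, \{2,3,4\}, \{1,2,3,4\}$; their only possible images are $\{0,1,2,3\}$, $\{0,2,3,4\}$ and the whole ground set, so the pigeonhole principle rules out any such injection.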

February 13, 2016 at 10:39 pm

This nice example prompts a simple question that might be interesting. For each $r \leq n$ let $\mathcal{A}_r$ be the collection of all unions of intervals of length $r$ in $\mathbb{Z}_n$. Equivalently, it is the collection of all sets of the form $S + \{0, 1, \dots, r-1\}$ with $S \subseteq \mathbb{Z}_n$. Must every element of $\mathbb{Z}_n$ have abundance at least 1/2 in such a system?
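For small values one can simply compute. The sketch below (my code) builds the family of all unions of cyclic intervals of length $r$ in $\mathbb{Z}_n$ and computes the abundance of an element, which by rotational symmetry is the same for every element; for $n = 5$ and $r = 2$, the family above, it comes to $10/17 > 1/2$.

```python
from itertools import chain, combinations
from fractions import Fraction

def interval_union_family(n, r):
    """All unions of cyclic intervals of length r in Z_n; equivalently,
    all sets of the form S + {0, 1, ..., r-1} with S a subset of Z_n."""
    subsets = chain.from_iterable(combinations(range(n), k)
                                  for k in range(n + 1))
    return {frozenset((x + d) % n for x in S for d in range(r))
            for S in subsets}

def abundance(fam, x):
    """Proportion of sets in the family that contain x."""
    return Fraction(sum(1 for A in fam if x in A), len(fam))
```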

This example has one thing going for it that not all examples have, which is that a random set of large density will typically *not* belong to the system.

February 13, 2016 at 10:56 pm

We can describe a set in $\mathcal{A}_r$ up to cyclic permutations as follows. Take a sequence $a_1, b_1, a_2, b_2, \dots, a_k, b_k$ of positive integers summing to $n$, with $a_i \geq r$ for each $i$, and let the set consist of the points covered as we go round the cycle taking $a_1$ points in, then $b_1$ points out, then $a_2$ points in, and so on.

The question we want to answer is: is the average size of a set built like this greater than or less than $n/2$? One might think it should be greater than $n/2$, because when you take one of the intervals in the set it is forced to have length at least $r$ (and in particular to be long), while when you take one of the intervals not in the set it is not restricted (and so is allowed to be shorter). Unfortunately, I think this is likely to be correct. But if it is correct, it might be well worth trying to prove it rigorously. Perhaps there is even a rather neat argument that does it.

February 14, 2016 at 1:58 pm

That’s an interesting question. I had a go at working out the answer exactly but just got a complicated sum that I couldn’t make headway with.

A different generalization (almost) of the above construction would be: start with some graph; take its vertex set as your ground set; and form the family consisting of all connected sets of vertices with cardinality at least half that of the original graph. By choosing suitable graphs we might obtain some interesting examples.

February 14, 2016 at 5:11 pm

The way I convinced myself that these families satisfy FUNC, without a rigorous proof: if we take a set in our family, we get a partition of $n$ into an even number of parts corresponding to the lengths of the intervals and the gaps between them. For any such partition, look at the sets of our family corresponding to it. The constraint on these sets is that all the pieces of the partition of length less than $r$ must be the lengths of non-intervals; it’s pretty clear [insert proof here] that this makes the non-intervals smaller than the intervals on average.

It’s worth noting that this is a union-closed family of the more general form “all the subgraphs of a graph G such that every edge is in a subgraph isomorphic to H”, which we thought about briefly in FUNC2 – here, G is a cycle of length k, and H is a path of length r.

February 14, 2016 at 8:33 pm

This is very nice.

I suppose that the Z_n-invariant version of the conjecture is already interesting. We want to show that, given a family of subsets of Z_n closed under rotations and unions, the average size of a set in the family is at least n/2. (Right?)

A different very drastic special case of FUNC is as follows: given a family of graphs on a set of $n$ vertices, closed under 1) isomorphism, 2) union, the average number of edges is at least $\frac{1}{2}\binom{n}{2}$. Maybe this is not hard?

Actually, maybe for union-closed families invariant under a transitive group of permutations on the ground set in general, and for the two special classes above an even stronger statement than FUNC is correct?

February 15, 2016 at 3:51 am

Since we now know that we cannot always have an injection $\phi\colon \mathcal{A}_{\overline{x}} \to \mathcal{A}_x$ with $B \subseteq \phi(B)$, can we at least always have such an injection with $|\phi(B)| \geq |B|$? Even weaker, do we always have an $x$ which belongs to at least half the sets and for which such a size-increasing injection exists?

February 15, 2016 at 9:43 am

Suppose that there is no size-increasing injection from $\mathcal{A}_{\overline{x}}$ to $\mathcal{A}_x$. Define a bipartite graph by joining each $B \in \mathcal{A}_{\overline{x}}$ to all $A \in \mathcal{A}_x$ that have at least as large cardinality. By Hall’s condition, there must exist a subset $\mathcal{S}$ of $\mathcal{A}_{\overline{x}}$ with fewer than $|\mathcal{S}|$ neighbours in this graph. Since adding to $\mathcal{S}$ a set that is not smaller than all the other sets in $\mathcal{S}$ does not affect the set of neighbours, we may assume that $\mathcal{S}$ consists of all sets in $\mathcal{A}_{\overline{x}}$ of at least a given size $k$.

So an equivalent formulation of your new question is this. Does there exist some $x$ such that for every $k$ the number of sets of size at least $k$ in $\mathcal{A}_x$ is at least the number of sets of size at least $k$ in $\mathcal{A}_{\overline{x}}$?
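As a quick concrete test of this reformulation (my code and check, not from the thread): in the 17-set family of unions of cyclic intervals of length 2 in $\mathbb{Z}_5$, the counterexample to the subset-injection conjecture, the tail counts for $x = 0$ do dominate for every $k$, so that example at least does not refute the size-increasing version.

```python
from itertools import chain, combinations

def interval_union_family(n, r):
    """All unions of cyclic intervals of length r in Z_n."""
    subsets = chain.from_iterable(combinations(range(n), k)
                                  for k in range(n + 1))
    return {frozenset((x + d) % n for x in S for d in range(r))
            for S in subsets}

def tails_dominate(fam, x, n):
    """For every k, is the number of sets of size >= k containing x at
    least the number of sets of size >= k not containing x?"""
    return all(sum(1 for A in fam if x in A and len(A) >= k)
               >= sum(1 for A in fam if x not in A and len(A) >= k)
               for k in range(n + 2))
```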

Another weakening of the question is to reverse the order of quantification. That is, can one show that for every $k$ there exists $x$ such that the number of sets of size at least $k$ that contain $x$ is at least the number of sets of size at least $k$ that do not contain $x$?

Of course, if we set $k = 0$ we just get FUNC, but the idea would be either to find a counterexample or to find a proof that uses FUNC.

February 15, 2016 at 5:01 pm

Alec’s example can be described by the Horn clauses $x \to \{x-1, x+1\}$ for $x \in \mathbb{Z}_5$: a set belongs to the family exactly when each of its elements has a cyclic neighbour in the set. Since the family is union-closed we can hope that many of its members contain $\{2,3\}$ as a subset, and indeed there are 4 such sets avoiding 0, namely

$\{2,3\},\ \{1,2,3\},\ \{2,3,4\},\ \{1,2,3,4\}$.

The set system $\mathcal{A}_0$ only contains 3 such sets,

$\{0,1,2,3\},\ \{0,2,3,4\},\ \{0,1,2,3,4\}$.

Therefore, there is no injective map $\phi\colon \mathcal{A}_{\overline{0}} \to \mathcal{A}_0$ with $B \subseteq \phi(B)$.

February 16, 2016 at 9:02 am

Tim and Tom’s argument for FUNC when the family is ‘all unions of cyclic translates of an interval in $\mathbb{Z}_n$’ is pretty convincing. I wonder if we can tackle the case of ‘all unions of cyclic translates of $S$’ for an arbitrary $S \subseteq \mathbb{Z}_n$ (i.e. the smallest rotationally-invariant union-closed family that contains $S$). Even the case where $S$ is a union of two intervals would be a start; this might be amenable to a similar (slightly more complex) argument.

February 16, 2016 at 8:03 pm

The weaker bijection version of FUNC is equivalent to a stronger numerical version of FUNC (I am not sure if we considered it before).

Given two sequences $(a_j)$ and $(b_j)$, we say that the first sequence majorizes the second if $\sum_{j \geq k} a_j \geq \sum_{j \geq k} b_j$ for every $k$.

Now, for an element $x$ we can write $a_k(x)$ for the number of $k$-sets in the family containing $x$, and $b_k(x)$ for the number of $k$-sets in the family not containing $x$. Stronger FUNC asserts that for some $x$ the $a$-sequence majorizes the $b$-sequence.

Maybe such a stronger statement is more amenable to some inductive argument; or, perhaps more likely, maybe it is more amenable to a counterexample.

February 17, 2016 at 4:37 am

Maybe we can go through the inequalities one by one and show that some $x$ among the set of $x$’s that satisfy the inequalities considered so far will satisfy the next inequality. For $k=1$ it is always OK, and I think I convinced myself it’s OK for $k=2$. (Also maybe it will be a little more convenient to move to the complements and talk about intersection-closed sets.)

February 13, 2016 at 10:07 pm |

In the post I gave as a question (not a serious helping-with-FUNC question) whether there is a subset of $\mathbb{Z}_p$ of density less than 1/2 such that for every $x$ there is an arithmetic progression mod $p$ of length 10 with common difference $x$. But this is easy: a random set of positive density will contain arithmetic progressions of length 10 with every common difference.

I think the question becomes interesting if instead one asks for much longer arithmetic progressions. It has a Kakeya-ish flavour, since one way of answering it would be to find a set of small density that contains a long arithmetic progression with every common difference. Does existing knowledge about the Kakeya problem give such an example?

February 14, 2016 at 5:28 pm

Thinking of the graphs of the arithmetic progressions as lines in $\mathbb{Z}_p \times \mathbb{Z}_p$, it is possible to give a lower bound for the size of such a set $S$ in terms of $k$, the length of the progressions.

This uses sum-product ideas: the point set has many incidences with the lines coming from the progressions, hence many collinear triples, and we can use this to show that there are many solutions to an equation of sum-product type with variables in $S$ and in the set of common differences. A theorem of Bourgain then bounds the number of such solutions, assuming that $p$ is sufficiently large; in the remaining range, we can use more recent sum-product results.

This method only uses the fact that there are at least $p$ progressions of length $k$ contained in $S$, one for each common difference, so in fact $p$ could be replaced by the number of progressions of length $k$ contained in $S$. For this reason, the exponent of $k$ obtained is the best this method can offer. If the Szemeredi–Trotter incidence bound were true over $\mathbb{Z}_p$, then we could do better.

There may be a better way of doing this, but the above estimates are “off the shelf”. In particular, we do not exploit the fact that the $x$-coordinates of our point set form an interval.

February 14, 2016 at 4:18 am |

I’m interested in extending the finite results on Frankl’s conjecture. Does anyone know where I can find the paper proving that if the size of the ground set is at most 12, then the family satisfies Frankl’s conjecture?

February 14, 2016 at 7:26 am

Actually, I think only 11 is known. The paper is “The 11-element case of Frankl’s conjecture” by Bosnjak and Markovic.

February 14, 2016 at 12:57 pm

Theorem 17 in the survey seems to say that 12 is known, as a result of Zivkovic and Vuckovic; or am I misunderstanding something? It just says “preprint” in the list of references, though, so perhaps it never appeared.

February 14, 2016 at 1:14 pm

You’re quite right – Theorem 17 says 12 is known, which I’d missed. As far as I can see, the result never appeared.

February 14, 2016 at 11:04 pm

Yes, papers about FUNC seem very hard to track down. I’ll try to recreate the proof myself.

February 16, 2016 at 9:46 am

I’ve been working on finding configurations of sets that imply Frankl’s conjecture, and I found some interesting results about the weight function. When the configuration implies FUNC, there is usually a range of possible weights that can work, but when it doesn’t, there is usually a unique weight function that gives the best bound for the minimal abundance. For example, for the configuration {1, 2, 3}, {1, 2, 4}, choosing w(1)=w(2)=31, w(3)=w(4)=24, the minimal abundance is 27/55, which is best possible. This was found by equating the weights of the extreme cases. I’m working on finding the best weight function for other configurations, but unfortunately, there are more variables to consider.

February 14, 2016 at 9:53 am |

For amusement, and perhaps more, here are some further generalizations of FUNC.

First, here’s a “ternary” version. Let $\mathcal{F}$ be a collection of functions $f\colon X \to \{0,1,2\}$, where $X$ is a finite set. Suppose that $\mathcal{F}$ is closed under taking pointwise maxima and that the pointwise maximum of all the functions in $\mathcal{F}$ is the constant function taking the value 2 everywhere. Must there be some $x$ such that $f(x) = 2$ for at least a third of the functions $f \in \mathcal{F}$?

While wondering about the slightly ad hoc condition about the pointwise maximum being the constant function 2, I came up with the following generalizations, which are not too carefully considered, so might be false for an easy reason. Indeed, the second one could be complete nonsense.

Let $\mathcal{F}$ be a collection of functions from $X$ to $\{0, 1, \dots, k\}$ that is closed under taking pointwise maxima. Let $F$ be the pointwise maximum of all the functions in $\mathcal{F}$. Must there exist $x$ such that the proportion of $f \in \mathcal{F}$ such that $f(x) = F(x)$ is at least $1/(F(x)+1)$?

One can rephrase the question in a suggestive way as follows. Given any $Y \subseteq X$ we can consider the projected function system $\mathcal{F}_Y$, the set of all functions $g$ defined on $Y$ such that $g$ is the restriction to $Y$ of some $f \in \mathcal{F}$. Does there exist some $x$ such that the abundance of $x$ in $\mathcal{F}$ is at least the abundance of $x$ in $\mathcal{F}_{\{x\}}$?

The reason I call that suggestive is that it raises the possibility of proving something by looking at projections to sets that are smaller than $X$ and larger than singletons.

February 14, 2016 at 10:04 am

One small aspect of the second generalization that I like, if it turns out not to be rubbish, is that it gives a reformulation of Frankl’s conjecture itself in a way that doesn’t require one to say that $\mathcal{A}$ contains at least one non-empty set.

February 14, 2016 at 10:45 am

Hmm, for the ternary version, what does FUNC tell us about this if we use the old trick of duplicating each point $x$ to $x_1, x_2$ and throwing in all implications in order to emulate ternary functions by binary ones? And what if one applies the same trick to the $\{0, 1, \dots, k\}$-version?

About proving something by looking at projections to sets that are smaller than X and larger than singletons, that’s what happens also in the proof of Knill’s unconditional lower bound of $(m-1)/\log_2 m$, where $m$ is the number of sets in the family.

February 14, 2016 at 12:37 pm

OK, to spell it out, you’re saying that if we have a ternary function system $\mathcal{F}$ on a ground set $X$, we can replace each $x \in X$ by two points $x_1, x_2$, and then for each function $f$ we take a set $A_f$ such that for each $x$: if $f(x) = 0$, then $A_f$ contains neither $x_1$ nor $x_2$; if $f(x) = 1$, then $A_f$ contains $x_1$ but not $x_2$; and if $f(x) = 2$, then $A_f$ contains both $x_1$ and $x_2$.

I’m not quite sure what we learn from FUNC here. A simple-minded argument will tell us that at least one $x_i$ has abundance at least 1/2, but we would be wanting to use the extra hypothesis that every $x_2$ belongs to at least one of the sets (and that a set containing $x_2$ must contain $x_1$) to get that some $x_2$ has abundance at least 1/3. But maybe there is a smarter argument that gives more.

February 16, 2016 at 2:34 pm

I don’t know if I’m missing something, but isn’t the set of functions {0, 0}, {0, 1}, {1, 0}, {1, 1}, {2, 2} (the first element is f(1), the second is f(2)) a counterexample to your conjecture?
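For what it’s worth, this is easy to confirm mechanically (a quick sketch):

```python
from functools import reduce

# the five functions, written as value tuples (f(1), f(2))
FNS = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]

def pmax(f, g):
    """Pointwise maximum of two value tuples."""
    return tuple(max(a, b) for a, b in zip(f, g))

closed = all(pmax(f, g) in FNS for f in FNS for g in FNS)
top = reduce(pmax, FNS)                 # pointwise max of the whole family
share = [sum(1 for f in FNS if f[i] == 2) / len(FNS) for i in range(2)]
```

The family is closed under pointwise maxima, its overall maximum is the constant function 2, and at each coordinate only 1/5 of the functions take the value 2, which is below the conjectured 1/3.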

February 16, 2016 at 2:45 pm

Indeed it is. Thanks for that. I’ll see whether I can come up with a better candidate for a ternary generalization.

In fact, here’s one right now. It suffers from being less obviously natural, but maybe it’s OK.

Define an operation $\circ$ on $\{0, 1, \dots, k\}$ by

$a \circ b = \min(a + b, k)$.

Extend that to an operation on functions/sequences by applying it pointwise.

Now let $\mathcal{F}$ be a set of functions from $X$ to $\{0, 1, \dots, k\}$ that contains at least one non-zero function and is closed under $\circ$. Must there be some $x$ such that $f(x) = k$ for at least $1/(k+1)$ of the functions in $\mathcal{F}$?

If true, this would be best possible, as can be seen by taking all functions from $X$ to $\{0, 1, \dots, k\}$.

February 16, 2016 at 2:50 pm

Let k = 4, and choose the functions {2, 2, 2}, {2, 2, 3}, {2, 3, 2}, {2, 3, 3}, {3, 2, 2}, {3, 2, 3}, {3, 3, 2}, {3, 3, 3}, {4, 4, 4}. I think the generalizations need to be a bit more complicated for them to work.

February 16, 2016 at 2:54 pm

What if you take the operation to be a&b, which is the maximum of a and b if they are different, and a+1 if they are the same. This might possibly work.

February 16, 2016 at 3:03 pm

Ah yes, good point. I was trying but failing to ensure that it wasn’t possible suddenly to “jump” to the maximum value.

I think it might be fruitful to search for a good conjecture here (maybe the one you have proposed is already a candidate) because it might give some insight into why Frankl’s conjecture is true — if it is.

February 16, 2016 at 3:05 pm

Amusingly, I formulated my revised conjecture after considering only the case $k = 2$ — that is, the ternary case — where it coincides with the version you propose. So it would be interesting to see whether just this case looks true.

February 17, 2016 at 10:08 am

Here’s a different proposal for getting rid of the counterexamples that Thomas points out above. Given two functions $f, g\colon X \to \{0, 1, \dots, k\}$, let us define $f * g$ to be the set of all functions $h$ with the following properties.

1. If $f(x) = f(y)$ and $g(x) = g(y)$, then $h(x) = h(y)$.

2. $h(x) \geq \max(f(x), g(x))$ for every $x$.

Now let the closure condition be that for every $f, g \in \mathcal{F}$ all functions in $f * g$ must belong to $\mathcal{F}$ as well.

February 17, 2016 at 10:49 am

I think I may be misunderstanding this. Is $f \vee g$ the pointwise maximum? If so, the second condition is equivalent to $h \geq f \vee g$. Isn’t this making a different statement from FUNC for the case $k = 1$?

February 17, 2016 at 10:52 am

Wouldn’t that be the equivalent of an up-set (which I believe someone showed satisfies FUNC)? Given that the constant function $f(x) = k$ is in $\mathcal{F}$, we have for every $g$ that all functions $h$ such that $g(x) \leq h(x) \leq k$ for all $x$ are in $\mathcal{F}$.

Pick a value in the domain, say 0. For every function $f$ with $f(0)$ not equal to $k$, there is a unique function $g$ such that $g(x) = f(x)$ for all $x$ except 0, where $g(0) = f(0) + 1$. This shows that $f(0) = k$ for at least $1/(k+1)$ of the functions in $\mathcal{F}$.

I think it’s very hard to come up with a good generalisation for all $k$ that isn’t trivially true or doesn’t have an easy counterexample.

February 17, 2016 at 10:54 am

Yes, Alec Edgington is right. I think that must be a condition on the generalisations that we use: one of their cases has to actually be the original Frankl conjecture.

February 17, 2016 at 10:57 am

Sorry, ignore that comment — I’m not sure what I was thinking!

February 17, 2016 at 11:10 am

Just to clarify the condition I wrote down, suppose that $k = 1$. In that case, let $f$ and $g$ be the characteristic functions of $A$ and $B$. Then for $h$ to belong to $f * g$ we require that $h$ is constant on $A \setminus B$, $B \setminus A$, $A \cap B$, and $X \setminus (A \cup B)$. From the second condition, we must have that $h$ is 1 everywhere on $A \cup B$. So the only thing left to choose is the value on $X \setminus (A \cup B)$. If we take this to be 0 then we get $A \cup B$, and if we take it to be 1 then we get $X$. So we recover Frankl’s conjecture, and in particular do not get an up-set.

February 17, 2016 at 11:17 am

In the general case, if the constant function taking the value $k$ everywhere belongs to the set, then for every function $g$ that belongs to the set, we get that all larger functions also belong to the set, as long as they are constant on the sets where $g$ is constant. The idea behind it is to get rid of examples where there is a sudden “jump” from most sets to a much larger set.

February 14, 2016 at 3:38 pm |

Let me elaborate on the (bipartite) graph interpretation of the union-closed sets conjecture. Let be a (finite) union-closed family containing also , and X be their nonempty union, the ground set.

Define a bipartite graph G with vertex set , where we make adjacent with all ; that is, G is the incidence graph of . (Warning: I suspect that the corresponding part of the proof of Theorem 6 of Bruhn and Schaudt’s survey is confused, using this with an intersection-closed family!)

Now one can consider all subsets of vertex set that are (inclusionwise) maximal stable sets in G; they are exactly the complements of the vertex sets that are (inclusionwise) minimal vertex covers in G. The family of traces of these (inclusionwise) minimal vertex covers in G coincides with the family ; this is the point where the union-closedness is crucial.

Given any , there is an (inclusionwise) minimal vertex cover in G such that . One can see that

then consists of those sets with B ∖ A ≠ ∅. Now double count all () edges of G; of them are covered by the vertices of A, and the edges not covered by A are exactly for

with and . We get for every ,

or equivalently, for every

that is what I am happy with…

February 14, 2016 at 3:57 pm |

An idle thought about weighted versions that I want to post but don’t have time to check for possible complete stupidity.

One is looking for a condition on in terms of the weights and . The idle thought was that a definition that generalizes the condition

if ,

which applies when all weights are 0 or 1 and the system is union closed, is the condition

.

That is, the weight of a union is at least the geometric mean of the weights of the two sets making up the union.

Aside from generalizing the normal union-closed condition (I think) to more general systems of weights on sets, this has the desirable property that if you multiply all weights by a positive constant, then it doesn’t change whether the condition is satisfied.

I haven’t looked at any examples, so it may be that there is nothing nice one can conjecture about abundances in weighted set systems that satisfy this condition.
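For anyone who wants to look at examples, here is a minimal Python sketch of the proposed condition, w(A ∪ B) ≥ √(w(A)w(B)); the particular weights are an arbitrary illustration, not taken from the discussion:

```python
from itertools import combinations
from math import sqrt

ground = (1, 2, 3)
powerset = [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

# An arbitrary example weighting: everything weight 1 except the empty set.
w = {A: 1.0 for A in powerset}
w[frozenset()] = 0.5

def gm_condition(w):
    """w(A ∪ B) >= sqrt(w(A) * w(B)) for every pair of sets."""
    return all(w[A | B] >= sqrt(w[A] * w[B]) - 1e-12
               for A in w for B in w)

def abundance(w, x):
    # Weight of sets containing x, divided by the total weight.
    total = sum(w.values())
    return sum(v for A, v in w.items() if x in A) / total

assert gm_condition(w)
print(max(abundance(w, x) for x in ground))  # 8/15 here, above 1/2
```

Multiplying all weights by a positive constant leaves both `gm_condition` and `abundance` unchanged, matching the scale-invariance noted above.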

February 14, 2016 at 4:01 pm

A small remark is that if , then we get that , which gives that or . It’s mildly encouraging that we don’t get that has to be at least as big as , which was a problem with another weighted version.

February 14, 2016 at 5:04 pm

That looks interesting, with a bit of a (probably spurious) ‘Hilbert space’ feel to it. A slightly unpleasant feature is that one obtains three different bounds on the weight of a triple union, namely , and the other two with the powers permuted. Maybe one would want to postulate in addition, and similarly for all unions of all arities?

Here’s a simple observation that may be relevant when thinking about counterexamples to weighted versions. For given , consider the set system

For either the new weighted version or the earlier one with , this set system is union-closed. Assuming FUNC, we find an element of abundance at least 1/2 in every , which means that its total weight is at least

. In order to keep this below some fixed value for all , it may help to keep the product roughly constant as a function of . One can therefore argue for trying to have a constant multiple of many sets with weight between and .

February 14, 2016 at 6:58 pm

I agree that that feature is unpleasant and was going to comment on it myself but you have beaten me to it. However, your proposed fix makes no difference, since the AM-GM inequality implies that

So I think the unpleasantness may be impossible to get rid of.

February 14, 2016 at 7:00 pm

Actually the AM-GM inequality was overkill there. It’s clear that a weighted average where you’re allowed to choose a permutation of the weights will be better than the usual average.

February 14, 2016 at 11:37 pm

Right, thanks!

Here’s a half-baked idea on how to derive the -weighted version of FUNC from plain FUNC. Maybe somebody who’s not currently a jet-lagged zombie will be able to tell whether anything like this has a chance of working.

Let me assume that all weights are integers in . By first taking rational approximations and then rescaling by a common denominator, this should be a wlog assumption.

For two such integer weights , we have the inclusion , which trivially is a homomorphism. Hence we can apply the fibre bundle construction in order to obtain a new union-closed family . It can be realized explicitly on the ground set (disjoint union). Here, it consists of all sets of the form with and . The assumed inequality for guarantees that this is union-closed.

What’s nice about this is that it encodes as the number of times that the -part of some set in this system is . So we have a partial inverse to obtaining a weighted system by restricting the ground set. Moreover, the abundance of some element in the -part is equal to the total weight in the original weighted system, and this is my main point.

Then the idea is to apply FUNC to . The problem with this is that now we not only have the elements of around as potentially abundant elements, but also those of . The abundance of these elements is monotonically increasing, with having an abundance of 1. So of course one can remove , but then may still have very large abundance. Hence applying FUNC to doesn’t tell us anything new. Is this because I’m not being clever enough, or because the whole construction is too trivial?

February 14, 2016 at 11:48 pm

Ever since this comment of Ben Barber I have been a bit suspicious of this weighted version. In particular, what do you suggest doing about the fact that a trivial counterexample is one where and all other sets have weight 0 (or tiny positive weight if you don’t want them to have zero weight)?

Note that with the geometric-mean suggestion, if , then for every we have that , so all other sets have weight 0 or weight at least 1. So for that version we can take the usual condition that should contain at least one non-empty set.

February 15, 2016 at 1:36 am

Yes, I agree. As I now realize, the assumption is merely a minimal requirement for the proposal in my previous comment to make sense; for example, it is implied by . But as you’ve reminded me, one certainly needs to use something more than just that min-inequality.

February 15, 2016 at 8:54 am

I was wondering about how one might come up with a “genuine” counterexample to the min version of the weighted conjecture: maybe one could simply require that as the condition of genuineness.

An idea I had, but I have no idea whether it works and I think others may find it easier to check than I would, is to take the Renaud-Sarvate example (not including the empty set) and give all sets weight 1 except the triple of non-abundant elements, which has a higher weight — as high as possible while retaining the non-abundancy property.

Or maybe if this doesn’t work, one can do a kind of “layered” weighting where you give a large weight to the triple, a smaller one to the basis elements that intersect the triple, and a yet smaller one to the basis elements that do not intersect the triple, and then make all other weights minimal given those choices.

But perhaps there’s a simpler approach that does the job.

February 15, 2016 at 11:06 am

A further remark about the GM condition is that it makes it possible to think about variational arguments.

For example, let be a weighting that satisfies the condition. Define the abundance of in an obvious way: it’s the sum of the weights of sets containing divided by the sum of all weights. Suppose now that there is a set that has positive weight and contains no element with maximal abundance. Then we can try to reduce the maximal abundance by increasing the weight of .

If we increase the weight of from to , then we’ll be forced to increase the weights of some other sets. Specifically, if ever we have some set such that , then we’ll have to increase the weight of to . For very small this is approximately .

If one of the sets contains an element of maximal abundance, then this threatens to cause problems. In fact, I think it will cause serious problems if there are at least two sets that have this property and contain . That leaves me not having anything particularly nice to say in this situation.

What about replacing each weight by a small adjustment ? The condition that the errors must satisfy is not quite as simple as one would like, but I think it’s this. (I’m ignoring second-order terms.) Let’s call a pair critical if . Then for every critical pair we need .

What we would then like to find is a function with that property (which it could achieve, for instance, by satisfying that inequality for every pair and not just critical pairs) such that for every of maximal abundance. I think that would show that the maximal abundance can be decreased. (Actually, I think I also need a condition that so that the total weight remains the same.)

It’s potentially encouraging that the conditions here are convex. It makes it tempting to use the Hahn-Banach theorem to say that if we can’t reduce the maximum abundance by making an infinitesimal change to the weights, then there exists some functional with an interesting property.

February 17, 2016 at 5:54 pm

I’ve been trying to find a counterexample to min-weighted FUNC together with , but so far to no avail. Tim’s suggestion to adapt the weight of the triple in the Renaud-Sarvate example sounds good, but doesn’t work because the triple already has an abundance close to 1/2, while the other elements have an abundance way beyond 1/2. Hence one cannot increase the weight of the triple enough to push the abundance of the other elements to below 1/2. The layered approach fails for the same reason. Something similar happens when fiddling with the weight of in Thomas Bloom’s example, as well as in an infinite family of higher-cardinality analogs that I’ve considered.

I’ve also been searching by hand for suitable weight functions on the power set of a 4-set, but again no luck. In case anyone has further ideas on what to try in order to find a counterexample, I’d be willing to work them out.

On the positive side, thinking about this has led to another generalization of FUNC that may be worth sharing. [I had already tried to post this separately, but it may have ended up in the spam filter.]

Suppose that we start with the indicator weight of and increase the weight of some which doesn’t contain any abundant element, as in Tim’s suggestion above. A short calculation shows that this is guaranteed to satisfy min-weighted FUNC if and only if

The same holds true if consists of only abundant elements and we decrease its weight. So here’s a generalization of FUNC: suppose that consists of only abundant or only non-abundant elements. Does this imply the inequality above? (To get a genuine counterexample to min-weighted FUNC, we’d also like to have , but the question may be interesting regardless.)

If we were to consider the inequality above for all , then we’d be asking whether the added abundance of the two most abundant and -separated elements is at least 1.

February 17, 2016 at 5:56 pm

Correction: the inequality should have been the other way around, .

February 14, 2016 at 6:41 pm |

I have a few thoughts about the Horn clause way of thinking about the problem suggested by Tobias Fritz. Observe that if we have any set of conditions and choose any condition in our list, then it does not change the resulting set system if we add conditions where . Therefore it is safe to add all conditions where . From now on, assume that we have added all such possible conditions.

Define and . It suffices when calculating to only check the conditions in . To see this, first note that if with then any X not satisfying C must be a subset of , which is of size at most , so all conditions with large size can be ignored. Next, let and assume that all conditions in are satisfied but not all in (induction backwards). Choose some unsatisfied and consider the corresponding for in . By our assumption, these are all satisfied, so which implies and since then (contradiction). This proves it is only necessary to check conditions for any set size l.

A set is forbidden if and only if it is for some A which is part of a condition (i.e. if there exists with ). Define the projection of a condition as . Then this observation shows that the number of forbidden sets must be exactly the number of unique sets in , therefore .

Define and . We now find a set of conditions to use with the set to work out . Clearly should ignore conditions of the form (not applicable). Recall any condition size only forbids in , so if then will be in the forbidden complement, so these conditions can also be ignored. Therefore I claim that the conditions is our set of conditions for working with set . Define . This means the size is .

February 14, 2016 at 6:41 pm

Summing each of these bounds gives us and . If FUNC fails then for all it holds that . Rewriting this, if FUNC fails then for all it holds that . Summing this, it would imply that the average size of sets in is strictly less than , which seems almost certainly false given the properties of . (It is almost true that for any , all are also in , but in fact if then we only know that all supersets not including are inside.) But still, may be promising?

February 14, 2016 at 7:00 pm

(Sorry, at some point I accidentally switched between using and – they mean the same)

February 14, 2016 at 8:02 pm |

Sorry, could this be moved to the proper place? It was intended to be a reply to Tim’s 7:00pm comment…

February 15, 2016 at 12:58 am |

A generalization of the conjecture that I think is interesting, but I don’t know if there are any counterexamples, is the following:

If is a basis of a union closed family we define:

Then the conjecture is that

February 15, 2016 at 1:34 am

We can prove the conjecture for by induction on . If we take , the sets are disjoint, so we assume that if then . Let be the basis for ; we take and such that , then we split into such that is the set generated by and is a set such that . It’s clear that , so our induction hypothesis is valid, and since , the claim follows.

February 15, 2016 at 7:06 am

Are you trying to show that if there is a unique element that occurs in the maximal number of ground sets, then this element is automatically abundant?

Unfortunately, this is not the case: let and let consist of all subsets of together with all sets where contains at least two elements. This union-closed family has as generators the singletons 1, 2, 3 and 4 as well as the sets of the form for distinct . Hence 0 is contained in 6 generators, which is more than any other element, but has an abundance of only 11/27. Or did I mess something up?
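For what it’s worth, the family can be reconstructed from the stated counts (27 sets, 6 generators containing 0, abundance 11/27); the ground set {0, 1, 2, 3, 4} is my reading of the elided notation. A quick Python check confirms the numbers:

```python
from itertools import combinations
from fractions import Fraction

# All subsets of {1,2,3,4}, plus {0} ∪ A for each A ⊆ {1,2,3,4} with |A| >= 2.
base = (1, 2, 3, 4)
subs = [frozenset(c) for r in range(5) for c in combinations(base, r)]
F = set(subs) | {A | {0} for A in subs if len(A) >= 2}

# Union-closedness: unions never leave the family.
assert all((A | B) in F for A in F for B in F)

abundance = {x: Fraction(sum(1 for A in F if x in A), len(F)) for x in range(5)}
print(len(F), abundance[0])  # 27 sets; element 0 has abundance 11/27
```

The elements 1 to 4 each turn out to have abundance 15/27, so the family does satisfy FUNC; only the uniquely most-frequent generator element 0 fails to be abundant.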

February 15, 2016 at 11:39 pm

Yes, Tobias Fritz, you’re right, thank you! I believe this proof is valid if there’s only 1 element that appears 2 or more times in the ground set.

February 16, 2016 at 9:44 am |

Here’s another strengthening that has come up in the search for counterexamples to the naive min-version of weighted FUNC (with ).

Let be an arbitrary set containing only non-abundant elements. Is it then necessarily the case that ?

In order for this to hold, the second term on the left must be , establishing the existence of an abundant element. Hence a positive answer implies FUNC.

The Renaud-Sarvate example satisfies this, as well as our earlier example due to Thomas Bloom (in his notation, every -family works). So far I haven’t been able to find any other counterexamples either.

February 17, 2016 at 2:08 pm |

Let me outline another strengthening of FUNC that I mentioned above. It may be more convenient to represent the conjecture for complements of the sets in the original family and thus talk about an intersection-closed family . Let be those sets in of size . So we can think about as a sequence of -uniform hypergraphs.

Now for an element of the ground set, let be the number of -sets containing , and be the number of -sets not containing .

We want to prove that there exists such that

(1)

(2)

…

(k) .

for every .

(In FUNC we have to exclude a family containing only the empty set and here we have to exclude the family containing all the elements of the ground set.)

(1) is always true, but every newly imposed inequality excludes some of the elements. “All” that we need to prove is that some elements survive when we move from k to k+1. Maybe, for such a step, some averaging argument (like those failing for FUNC) will come back to life?

February 18, 2016 at 4:22 am

This is a comment related to the above comment but also to the other strengthening. As Igor Balla remarked https://gowers.wordpress.com/2016/01/21/frankls-union-closed-conjecture-a-possible-polymath-project/#comment-153911

For all we know, maybe we can replace the condition

(*) ” F is union closed”

by the condition that

(**) there is a disjoint family of Boolean intervals [A_i, B_i] such that F consists of the A_i’s and the B_i’s form a filter (up-set).

We don’t know that (**) does not imply FUNC, as well as various strengthenings like mine above. (Since I moved to complements in the formulation above, we need to complement (**) as well.)

February 18, 2016 at 1:40 pm

Assume to be the union closed system generated by all intervals of length . Then, the first non–interval sets appear at cardinality larger than and here we have more sets in than in . Thus the above sequences represent the worst case and your conjecture follows.

That means:

Let be the union closed set system generated by all intervals of length in for prime. Then satisfies your conjecture (and thus there is an injective map with and thus FUNC).

February 18, 2016 at 1:42 pm

(The comment before this got scrambled and can be deleted.)

Assume to be the union closed system generated by all intervals of length . Then, the first non–interval sets appear at cardinality larger than and here we have more sets in than in . Thus the above sequences represent the worst case and your conjecture follows.

That means:

Let be the union closed set system generated by all intervals of length in for prime. Then satisfies your conjecture (and thus there is an injective map with and thus FUNC).

February 18, 2016 at 1:44 pm

I see. My comments are too long. Sorry for this.

February 18, 2016 at 1:52 pm

The proof idea for the following is too long to fit in a comment:

Let be the union closed set system generated by all intervals of length in for prime. Then satisfies your conjecture (and thus there is an injective map with and thus FUNC).

I have to think about how I can get it here. Maybe I put it on the Wiki. My other comments in the thread can be deleted.

February 18, 2016 at 4:16 pm

It would be great to have the comment here. I wonder whether perhaps you accidentally used a “less than” symbol, which can cause big chunks of text to disappear, especially if later on there is a “greater than” symbol.

One possibility might be for you to email me the comment. I can then post it for you (and can do so in such a way that it will be under your name).

February 22, 2016 at 3:33 pm

I did some computer experiments with union closed set systems generated by intervals in and so far all of them did (easily) satisfy this conjecture.

Any ideas on where to search for a counterexample are welcome.

February 22, 2016 at 3:48 pm

This is a non-garbled version of the comment posted earlier in this thread.

Assume to be the union closed system generated by all intervals of length in . I try to show that at least some of these systems satisfy your conjecture.

Since each set is also in , by translation invariance all its translates are in . I want these translates to be different, therefore I fix prime. Then of these sets of size are in and of these sets of size are in .

If we repeat this for all cardinalities from to , we can see that (in this situation) contains set of cardinality , sets of cardinality and so on, whereas contains sets of cardinality , set of cardinality and so on.

However, there is an obstacle. The set of sets of a fixed cardinality in does not need to be generated by just one element. If that happens for small cardinalities less than or equal to , there will be many more sets (of this cardinality) in than in . I fix that by brute force, requiring . Then, the first non-interval sets appear at cardinality larger than , and here we have more sets in than in . Thus the above sequences represent the worst case and your conjecture follows.
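These counts are easy to reproduce by computer; the sketch below assumes that “intervals of length d in Z_n” means cyclic intervals, with n = 7 and d = 3 as an arbitrary small case:

```python
from itertools import combinations
from collections import Counter

n, d = 7, 3  # assumed small example: cyclic intervals of length 3 in Z_7
gens = {frozenset((i + j) % n for j in range(d)) for i in range(n)}

# Close the generators under unions.
F = set(gens)
changed = True
while changed:
    changed = False
    for A, B in combinations(list(F), 2):
        if (A | B) not in F:
            F.add(A | B)
            changed = True

assert all((A | B) in F for A in F for B in F)  # union-closed
rotate = lambda A: frozenset((x + 1) % n for x in A)
assert all(rotate(A) in F for A in F)           # translation invariant

counts = Counter(len(A) for A in F)
print(sorted(counts.items()))  # the n sets of size d are exactly the generators
```

In this case the family consists of the cyclic runs of length 3 to 6 (seven of each, one per starting point) together with the whole of Z_7, which matches the “one translate class per cardinality” pattern described above.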

That means:

February 17, 2016 at 4:32 pm |

It seems to me that we are beginning to put together quite a nice collection of strengthenings of FUNC. Of course, they have not yet been very stringently tested, so it is likely that some of them will fall. But if any of them survive and turn out to be new, then this project will have achieved something worthwhile.

In this comment I want to amplify on a remark I made earlier about how a weighted version of FUNC might lend itself to a variational approach. Here, once again, is the weighted version. Let be a non-negative function on the power set of a finite set and suppose that for at least one non-empty set . Suppose also that for every pair of sets . If all the weights are 0 or 1, then this is precisely saying that we have the characteristic functions from a union-closed family that contains a non-empty set.

The conjecture would now be that there exists such that

.

It is quite possible that there is a completely trivial counterexample to this conjecture — I have not looked carefully. So there’s a danger that this comment is a waste of time. But let me make it anyway.

What I would like to find is a nice condition that must satisfy if it is impossible to reduce the maximum abundance of any element of . And then in my dreams I’d like to show that the only set systems that satisfy this nice condition are something good such as power sets with duplicated elements. In other words, I’d like to find a smooth path from an arbitrary system of weights to a system of weights where all abundances are 1/2, while all the time not increasing the maximum abundance.

So let’s take a system of weights and let’s assume that it cannot be improved by an infinitesimal change.

I’m going to assume for convenience that all weights are non-zero, since the maximum abundance depends continuously on the weights, so if it is always at least 1/2 for non-zero weights, then by a limiting argument it is always at least 1/2 when the weights are allowed to be zero.

So now we can represent an arbitrary perturbation of the weights as replacing the weight by the weight . We will get an improvement to the maximum abundance if we can choose the to have the following properties.

(1) For every such that , .

(2) .

(3) for every of maximal abundance.

Let me briefly explain why this is. The total change to the weights is , so condition (2) is saying that the sum of the weights does not change. The change to the geometric mean is, to first order, , so in order to preserve the GM condition we need condition (1) (and it is sufficient, up to first order). The change to the total weight of sets containing is , so if that is negative, then because the total weight over all sets has stayed the same, the abundance of has decreased. (Furthermore, it has decreased by a first-order amount, which is why second-order errors in (1) do not matter.)

Since this comment is getting long, I’ll finish it and continue with a new one, in case I lose what I’ve written.

February 17, 2016 at 4:51 pm

The plan now is to dualize. That is, if we cannot find satisfying the above system of inequalities, there should be a separating functional of some kind. Such functionals are often useful, so it seems like a good idea to work out what this one is, and what properties it has.

The set of satisfying condition (2) is a hyperplane, so actually exhibiting the functional is not a challenge: it is the hyperplane itself. More formally, it is the linear map .

Let be the linear functional , and let be the linear functional . Then the assumption we are making is that if for every critical pair (that is, a pair such that ) and for every of maximal abundance, then . It is clear that if this is the case then it must in fact imply that .

But this is equivalent to saying that is a non-negative linear combination of the such that is a critical pair and the such that has maximal abundance. Since the sum of the values of are zero and the sum of the values of are positive, we also get that at least one occurs with a strictly positive coefficient.

Disentangling that slightly, we have non-negative coefficients and such that for every set we have the identity

.

Here is the set of elements of maximal abundance, and if , if , if (and if more than one of these happens one just adds up the relevant values), and 0 otherwise. Also, the sum over is over the critical pairs.

It’s not clear to me where we can go with a condition like this, but I think it might be worth thinking about.

Also, there’s a significant chance that I’ve made mistakes in the above.

February 17, 2016 at 9:36 pm

One thing that puzzles me about the statement ‘the maximum abundance depends continuously on the weights, so if it is always at least 1/2 for non-zero weights, then by a limiting argument it is always at least 1/2 when the weights are allowed to be zero’, which sounds perfectly true, is that if we do assume the weights are non-zero, it follows that whenever , which is a much stronger condition.

I think the problem must be that it is not always possible to perturb a weighted family with zeros that satisfies the condition to a nearby weighted family without zeros that satisfies the condition.

February 17, 2016 at 10:19 pm

Yes that’s quite right. I had previously understood the importance of letting some sets have zero weight, but wishful thinking led me to believe my wrong argument (which is wrong for exactly the reason you suggest).

So I think that the best that the approach outlined above can give is a condition on the weights once you have decided which ones are non-zero. A potentially disappointing outcome is that the argument would show that the best system of weights was one that was 1 or 0 everywhere. I’d be glad of a simple counterexample to that.

Here’s an attempt at one: . Here 1 and 2 have abundance 4/7 and 3 has abundance 3/7.

Suppose we now slightly increase the weights of 13, 23, and 123 to . Then 1 has weight , as, by symmetry, does 2. Also, the GM condition is satisfied, since the weights are monotone. But tragically I now see that the abundance of 1 has gone up rather than down.

Instead, let me reduce the weights of 13 and 23 to . Then the weight of 123 must be at least 1-c (ignoring lower-order terms), but otherwise I think we’re OK. Then the abundance of 1 becomes , I think. This looks better: 5c is a smaller proportion of 7 than 3c is of 4.

February 18, 2016 at 2:06 am

For further reference it may be worth pointing out that the variational duality approach above looks like an instance of the Karush-Kuhn-Tucker conditions.

A nice way to write this kind of weighted FUNC as an optimization problem may be as follows:

maximize

subject to:

The idea here is that one tries to maximize the total weight, subject to the usual constraints on plus the additional assumption that all elements have a weighted absolute abundance of at most 1/2. Weighted FUNC then claims that the solution of this optimization problem is no larger than 1.

February 18, 2016 at 3:29 am

Duality for KKT recovers something like Tim’s approach. Following the notation here: https://en.wikipedia.org/wiki/Duality_(optimization)#The_non-linear_case, Tobias’ optimization problem is

minimize , where

over the domain D of w with all components non-negative, subject to

where

and

.

If is the minimal feasible value of , then we wish to show that .

The dual problem begins by defining the Lagrangian

,

where .

Then we define the dual objective function by

.

Since for all , it suffices to show that .

February 18, 2016 at 3:37 am

Nice! I think that we’ll get some prettier expression by using the constraint instead.

Also, maybe it would be interesting to perform the optimization numerically, say for sizes of the ground set between 3 and 5?

February 18, 2016 at 5:27 am

Ah yes, that is a nicer constraint.

My advisor is teaching a “data science” course that is something like combinatorics and linear programming plus Python. Maybe some of his students would be interested in running numerics.

February 18, 2016 at 12:33 pm

If a counterexample to this weighted version doesn’t emerge soon, then it would certainly be very nice to have some data to look at about the optimal weights for various set systems that we have been considering. (I’d be interested to know what the right weights were for the Renaud-Sarvate example, for instance.)

February 18, 2016 at 7:10 pm

If there is a counterexample to Tim’s GM-weighting conjecture, then its support must be a counterexample to Gil’s previous conjecture about injections such that for all . Indeed, if is such an injection, then for any supported on that satisfies the GM condition, .

So the first place to look for a counterexample to the GM-weighting conjecture is the example considered previously.

February 18, 2016 at 10:12 pm

I think we can eliminate any counterexample supported on the intervals of of length at least 2, by an averaging argument. Observe that

(where is the size of the ground set). Writing , this is . If we can show this to be non-negative then we’re done. In the case in question this is . Since we can map the 2-intervals bijectively to the 3-intervals in such a way that each maps to a superset of itself (just extend it by one to the right), we have that , and the other terms are non-negative, which I think completes the proof. However, I have a niggling doubt that it was too easy and I’ve made a mistake.

February 18, 2016 at 9:00 pm |

Here is yet another strengthening of FUNC, which is ‘weaker’ than Gil’s injection-to-supersets conjecture but ‘stronger’ than Tim’s GM-weighting conjecture.

Let be a union-closed family over a finite set , and let be such that whenever and . Is there always an such that ?

(Of course, could be replaced by , or , without changing the substance of the question.)

February 18, 2016 at 9:18 pm

Ah, sorry, that is exactly equivalent to Tim’s GM-weighting conjecture, isn’t it?

February 18, 2016 at 9:25 pm

Also, the statement is false if we allow negative values for , so we have to add that .

February 18, 2016 at 9:47 pm

Interesting observation!

For fixed and real-valued , this conjecture is a linear programming problem, which makes it very amenable to a numerical approach. I’d suggest using a reformulation similar to the above, where we impose all elements to have weighted absolute abundance of at most 1/2 and try to maximize the total weight:

maximize

subject to for all

for

for all

The conjecture is that the optimal solution is no larger than 1.

I could run this on some examples, such as Renaud-Sarvate, over the weekend. But if somebody else wants to do it, please go ahead!

February 18, 2016 at 9:48 pm

Sorry, the right-hand side of the last inequality should have been 1/2…
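To make this concrete (with the corrected 1/2 on the right-hand side), here is a hedged sketch of the linear program for the toy family consisting of all subsets of {0, 1}. The normalisation f(∅) = 0 and the exact form of the constraints are my reading of the proposal, since the formulas above did not come through:

```python
from scipy.optimize import linprog

# Variables: f({0}), f({1}), f({0,1}); take f(∅) = 0 (assumed normalisation).
# Maximise the total weight subject to monotonicity on the family and
# weighted absolute abundance at most 1/2 for each element.
c = [-1.0, -1.0, -1.0]  # linprog minimises, so negate to maximise

A_ub = [
    [1, 0, -1],  # f({0}) <= f({0,1})
    [0, 1, -1],  # f({1}) <= f({0,1})
    [1, 0, 1],   # abundance of 0: f({0}) + f({0,1}) <= 1/2
    [0, 1, 1],   # abundance of 1: f({1}) + f({0,1}) <= 1/2
]
b_ub = [0, 0, 0.5, 0.5]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(-res.fun)  # optimal total weight; the conjecture predicts <= 1
```

On this family the optimum should be 3/4, attained at f({0}) = f({1}) = f({0,1}) = 1/4, comfortably below the conjectured bound of 1.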

February 18, 2016 at 11:28 pm

Isn’t it also equivalent to my “injection-to-larger” conjecture (which I think is itself equivalent to my sequence of inequalities)?

February 18, 2016 at 11:50 pm

Ah, I hadn’t spotted that equivalence. I think it makes my conjecture less interesting — I had thought previously that it was possible for to lie strictly between and . At the very least, it means that stating it in terms of geometric means is silly. The condition is simply that is supported on a union-closed family and that it is monotone on that family. But it is still mildly interesting that choosing to be 1 everywhere is not always optimal.

Maybe something GM-ish could be recovered if the condition were modified to one that merely says that when is not equal to either or .

February 19, 2016 at 8:45 am

Gil, could you explain the equivalence to your injection-to-larger conjecture?

It would definitely be interesting to run this through a linear optimization program for various union-closed families.

If true (and it does seem hard to find a counterexample), it’s conceivable it could be more amenable to an inductive proof.

February 19, 2016 at 9:58 am

For what it’s worth, given the equivalence noted by Alec, the condition for not being able to improve the weights can be modified as follows. (By the way I should confess to a mental block here. I’ve never been able to learn about duality of linear programming, Lagrange multipliers, etc., so whenever such a problem comes along I have to think about it in my own equivalent way. Apologies for this.)

There are non-negative coefficients and , where the are defined for all with and the are defined for all of maximal abundance, and they have the property that for each ,

.

February 19, 2016 at 10:13 am

As a quick example, if we have the system , then is the unique element of maximum abundance. Set . That contributes a weight of 1 to every set except the empty set. Now let and all other be zero. Adding these contributions gives us , and also for all the other sets, so we have shown that the constant weight cannot be locally improved.

Slightly disturbingly, that also seems to show that any allowable system of weights that is constant on the non-empty sets cannot be locally improved, but that is surely false, since one would like to give as much weight to the empty set as possible. I’ll have to think about where I’ve gone wrong.

February 19, 2016 at 12:23 pm

Maximizing subject to for all and for all on the union-closed set system (of cardinality 119) generated by {{7, 8}, {7, 9}, {7, 10}, {8, 9}, {8, 10}, {9, 10}, {1, 7, 8}, {2, 7, 9}, {3, 7, 10}, {4, 8, 9}, {5, 8, 10}, {6, 9, 10}, {1, 2, 3, 4, 5, 6}} yields a weight of for all sets containing {1,2,3,4,5,6} and a weight of for all other sets. The total weight would then be .
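For anyone who wants to reproduce the cardinality 119, here is an illustrative Python sketch (my own code; the 13 generators are copied from the comment above):

```python
# Reproduce the cardinality of the union closure of the 13 generators.
from itertools import product

def union_closure(generators):
    fam = {frozenset(g) for g in generators}
    while True:
        new = {a | b for a, b in product(fam, fam)} - fam
        if not new:
            return fam
        fam |= new

gens = [{7, 8}, {7, 9}, {7, 10}, {8, 9}, {8, 10}, {9, 10},
        {1, 7, 8}, {2, 7, 9}, {3, 7, 10}, {4, 8, 9}, {5, 8, 10},
        {6, 9, 10}, {1, 2, 3, 4, 5, 6}]
fam = union_closure(gens)
print(len(fam))  # 119
```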

February 19, 2016 at 3:44 pm

I did not say they are equivalent. I only asked about it.

Actually I want the same x for every f (do you expect that in your version?), but I think I am only looking at f of the form that is 1 if the set has at least t elements.

February 19, 2016 at 3:46 pm

Since the conditions on make sense for any set system, it would be nice to have examples where the maximum is larger than 1.

If there were a nice family of examples, a reasonable strategy would be to prove that a set system whose optimal weight is above 1 is similar to an extremal example, while presumably the extremal examples are not union-closed.

It would also be nice to know if the optimum value of is Lipschitz as a function of the set system (with the Hamming metric on the collection of set systems). E.g. in Dominik’s example, throwing away a few elements of the set system doesn’t change the maximum weight by more than per set. If we knew this in general, then the condition of having maximum weight more than 1 would be “robust”.

February 19, 2016 at 4:32 pm

For non-union-closed families the maximum can be arbitrarily large: just take a set system of singletons.

I also ran my script on the 27 sets from the Renaud-Sarvate example. There the optimal weights are for all sets containing 123 and for all other sets. The maximum corresponding to these is then .

February 19, 2016 at 5:10 pm

I can confirm Dominik’s results for both the 119-set and the 27-set families.

February 19, 2016 at 5:28 pm

Postulating the same to work for all is too strong: take to consist of all subsets of that are missing at most a singleton, and let be the indicator function of minus . Gil, could you elaborate a bit on your additional condition on to be on sets of at least elements? What is ?

Let me try to understand the results of Dominik’s and Eckhard’s computations in the case of Thomas Bloom’s example. The weighted abundance of each of ends up being , while the weighted abundance of each of is , so that all the inequalities are tight. This must be where these funny weights are coming from. Since the total weight is , there is no counterexample here.

(It should be possible to obtain a small improvement by throwing in the empty set, but it will be too small to result in a counterexample.)

February 19, 2016 at 6:06 pm

I wrote, “Postulating the same to work for all is too strong [..]”. Please ignore that claim and the example. I must have been out of my mind.

February 19, 2016 at 6:50 pm

Gil: ah, interesting, indeed maybe one can have the same for every . Maybe any abundant element would work?

February 19, 2016 at 7:16 pm

If one requires to be independent of in our current version of weighted FUNC, then this implies Gil’s injection-to-larger conjecture: simply take to be the indicator function of all sets of size at least in and of size at least in . It then follows that the cardinality of the former set is at most the cardinality of the latter set, which is equivalent to Gil’s conjecture.

Summarizing, we have the following implications:

wFUNC with depending on

injection-to-larger

As observed by Alec, the arrow on the right is relevant in that it shows that every counterexample to the other conjectures must also be a counterexample to the (already disproven) question about injections with .

February 19, 2016 at 7:30 pm

Now that diagram got messed up… let me try again as ASCII art:

Gil’s injection-to-larger
   Λ
   II
wFUNC with constant <= Gil's
   II
   V
wFUNC with depending on

One more thing: I think that wFUNC with constant holds if and only if is abundant in every upper set in . In one direction, the indicator function of every upper set is monotone, and hence must be abundant in the upper set. In the other direction, note that every monotone is a positive linear combination of indicator functions of upper sets.
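The decomposition in the last paragraph can be written out concretely as a "layer cake" over the thresholds of the function. An illustrative Python sketch (my own notation; the family and function are a toy example):

```python
# Illustrative "layer-cake" sketch: a nonnegative monotone weight
# function on a family is a positive combination of indicator functions
# of upper sets, obtained by peeling off one threshold at a time.
from fractions import Fraction

def upper_set_decomposition(family, f):
    """Return [(coefficient, upper_set), ...] with positive coefficients
    such that f(A) = sum of coefficients of the upper sets containing A."""
    values = sorted({f[a] for a in family if f[a] > 0})
    pieces, prev = [], Fraction(0)
    for v in values:
        # since f is monotone, each super-level set is an upper set
        upper = frozenset(a for a in family if f[a] >= v)
        pieces.append((v - prev, upper))
        prev = v
    return pieces

# Example: a small union-closed family with f = cardinality (monotone).
family = [frozenset(s) for s in [{1}, {2}, {1, 2}, {1, 2, 3}]]
f = {a: Fraction(len(a)) for a in family}
pieces = upper_set_decomposition(family, f)
for a in family:  # the decomposition reconstructs f exactly
    assert f[a] == sum(coef for coef, u in pieces if a in u)
print(len(pieces))  # 3: one piece per distinct positive value of f
```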

February 20, 2016 at 11:57 am

Nice. I’ve added a section to the wiki listing various strengthenings that have been proposed and relationships between them.

February 20, 2016 at 8:03 pm

I’ve just realized that the above-claimed implication from constant- weighted FUNC to -depends-on- weighted FUNC is incorrect. In fact, constant- weighted FUNC is false. Let again be the subsets of generated by the 2-intervals. Without loss of generality assume . Then we can assign weight 1 to each of the 7 sets containing , and weight 0 to all the other sets. This is a monotone weighting function but the sum of the weights of the sets containing 0 is only 3.

February 20, 2016 at 8:57 pm

Sorry, not thinking straight: the implication is indeed correct, but the conjecture is false.

February 19, 2016 at 3:16 pm |

Let me return to the strengthening I mentioned above (which may be related to the other ones) described in a dual form.

We have a ground set of n elements. Let’s call them vertices.

Now we have a sequence of k-uniform hypergraphs, k=0,1,2,…,r. contains the empty set. is a set of vertices. is a graph. is a collection of triples, and so on. We let .

The crucial property is that the intersection of two edges in is also in .

We want to identify vertices x with the following property:

P(j): In there are at least as many edges not containing x as edges containing x.

We denote the difference between these numbers by .

This is stronger than FUNC but maybe not by much.

For j=1 this is true, where e(x) is 0 or 1 according to whether x is in or not.

For j=2 the x’s with negative e are those vertices of which have at least 3 neighbors, and moreover have at least 3 neighbors not in . There could be such vertices, but a simple inspection shows that there must be x’s not of this kind. So perhaps we should try to examine the next couple of values for . (Maybe this coincides with things that are known.)

February 19, 2016 at 5:23 pm |

It’s for sure not very efficient but here is a web version of the maximization to play around with: https://www.wolframcloud.com/objects/56d97a3d-f2e7-4669-b28b-e618bc31398d

It accepts set systems in Mathematica-style curly-brace syntax, e.g. {{1,3},{2,4},{5}}, then generates the smallest union-closed set system containing the given system and performs the maximization subject to for all and for all .

The Mathematica Code for generating the union closed set system is

UnionClosedSetSystem[A_] := (Atemp = A;
  While[Length[
     DeleteDuplicates[
      Union[#[[1]], #[[2]]] & /@ Tuples[{Atemp, Atemp}]]] > Length[Atemp],
   Atemp = DeleteDuplicates[
     Union[#[[1]], #[[2]]] & /@ Tuples[{Atemp, Atemp}]]];
  Atemp)

The code for the maximization is:

MaximizeWeights[B_] :=
 NMaximize[{Total[w[#] & /@ B],
   Apply[And, (w[#] >= 0) & /@ B] &&
    Apply[And,
     Complement[
      If[SubsetQ[#[[2]], #[[1]]], w[#[[2]]] >= w[#[[1]]], 0] & /@
       Tuples[{B, B}], {0}]] &&
    Apply[And,
     Table[1/2 - Total[w[#] & /@ Select[B, MemberQ[#, k] &]] >= 0, {k,
       Min[B], Max[B]}]]}, w[#] & /@ B]

February 19, 2016 at 5:36 pm

Very nice! Running this on Renaud-Sarvate plus the empty set results in an optimal value of , with the same optimal weights on the nonempty sets, and being their minimum. Reasonable.

February 19, 2016 at 6:05 pm

Thanks!

I tried a few cyclically symmetric families (all unions of rotations of some set) on this program and they all came out with all weights equal. I wonder if this is a property of symmetric families in general (and also whether there are any non-symmetric families that have this property).

February 19, 2016 at 8:33 pm

Without checking whether everything I say is true, I think it should be the case that symmetric families will come out with all weights equal, because it should be that a convex combination of weights is at least as good as the worst weight, so one can take a convex combination of an optimal weight with all its symmetries.

February 21, 2016 at 12:48 am

Regarding Alec’s comment, I am not sure if this is still relevant for the specific property that symmetric families might have, but let me mention two properties that are weaker than being symmetric. (By symmetric we mean invariant under a transitive group of permutations on the ground set.)

1) weakly regular: every element is contained in the same number of sets from the family.

2) more regular: every element is contained in the same number of k-sets from the family for every k.

I would expect that properties for symmetric families relevant to FUNC and its extensions will largely extend to the “more regular” version.

February 20, 2016 at 6:00 am |

It’s interesting that there are exactly two weights in the Renaud-Sarvate type examples. Perhaps gaps in weights can be used to cluster the sets.

It also seems that the weights only depend on the minimal elements, although I’d be surprised if that was true in general.

Has anyone found an example with three distinct weights?

February 20, 2016 at 9:35 am

There appear to be plenty of examples with three distinct weights, even for ground sets as small as {1,2,3,4}. For example, the 13-set family has an optimal weight tally of 1/22 (4 times), 3/44 (5 times), and 7/88 (4 times), giving a total of .

All cyclically symmetric families on ground sets with at most 8 elements have all optimal weights equal and total less than one.

February 20, 2016 at 10:14 am

Are your cyclically symmetric families all generated by the translates of a single set, as in Alec’s case? Or did you also try taking the cyclic shifts of two or more sets and then the union closure of those?

In the case of a single generator up to symmetry, I don’t find it so surprising that all the weights are the same: by symmetry and convexity, we can assume wlog that all generators have the same weight (see Tim’s 8:33pm comment). Given this weight for each generator, we obtain optimal abundances if we choose the weight of the other sets as small as possible, i.e. equal to the weight of the generators.

It would be a bit more puzzling if we still always had equal weights in the case of two or more generators up to symmetry. Does this happen as well?

February 20, 2016 at 10:31 am

Thanks, I should have qualified my statement. I had only tested families that were generated as the union-closure of cyclical translates of a single set. I ran a quick test and it seems that the observation remains the same for families generated by rotations of up to five sets on ground sets with up to five elements.

February 20, 2016 at 12:21 pm

It seems on the evidence so far that the more symmetric components there are in , the more distinct weights there are.

To make that precise, for , say if there’s some permutation of such that and permutes the sets in . Is the optimal number of distinct weights always less than or equal to the number of equivalence classes under this relation?

February 20, 2016 at 12:23 pm

I found an example of a cyclically symmetric 30-set family in which the optimal weights are not all equal. It has ground set {1,2,3,4,5,6} and is generated as the union-closure of rotations of {1,2,3} and {1,2,4,5}. In my script, the optimal weights come out to be zero for the three translates of {1,2,4,5}, and 1/36 for the remaining 27 sets in the family, for a total of 3/4.

I find the example a bit unsatisfactory, however, because it seems too “reducible” in the sense that removing the three zero-weight sets preserves union-closedness of the family.

The search continues.

February 20, 2016 at 8:41 pm

For Eckhard’s 30-set family, I could only find 28 sets. We have one 6-set, six 5-sets, at most fifteen 4-sets, and six 3-sets, plus perhaps the empty set, which gives at most 1+6+15+6+1 = 29.

In addition, I could only find 14 sets containing 3, one of which has zero weight, which would mean that the total weight of 3 (and hence any element, by symmetry) is 13/36. But this means that we could multiply all of the non-zero weights by 18/13 while still satisfying the constraints, which gives a total weight of (18/13)*(3/4) = 27/26 > 1.

There’s a very significant chance I made a mistake, since I was working by hand. Can anyone else verify either one of our calculations?

(One possible resolution is this: I think the correct total number of non-empty sets is 1+6+9+6 = 22: one 6-set, six 5-sets, nine 4-sets, and six 3-sets. If we increase the weights to (1/36)*(18/13) = 1/26, then the total weight is 22/26 < 1.)

February 20, 2016 at 9:13 pm

You’re right. I described my example incorrectly: its generators should be the _three_ sets {1,2,3}, {1,2,4,5} and {1,3,5}. Sorry for the confusion! Then the 30 sets break down as eight 3-sets, fifteen 4-sets, six 5-sets, and one 6-set. I agree that the example, as originally described, results in a 22-set family. However, I obtain constant optimal weights of 1/30 and a total of 11/15, which is less than your 22/26.
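The corrected family is easy to reproduce. Here is an illustrative Python sketch (my own code, assuming "rotation" means cyclic shift mod 6 on the ground set {1,…,6}):

```python
# Reproduce the corrected 30-set family: the union closure of the
# cyclic rotations (mod 6) of {1,2,3}, {1,2,4,5} and {1,3,5}.
from itertools import product

def rotations(s, n=6):
    return [frozenset(((x - 1 + k) % n) + 1 for x in s) for k in range(n)]

def union_closure(generators):
    fam = set(generators)
    while True:
        new = {a | b for a, b in product(fam, fam)} - fam
        if not new:
            return fam
        fam |= new

gens = (rotations({1, 2, 3}) + rotations({1, 2, 4, 5})
        + rotations({1, 3, 5}))
fam = union_closure(gens)
sizes = sorted(len(a) for a in fam)
print(len(fam))  # 30
print({k: sizes.count(k) for k in sorted(set(sizes))})
# {3: 8, 4: 15, 5: 6, 6: 1}
```

This matches the stated breakdown: eight 3-sets, fifteen 4-sets, six 5-sets, and one 6-set.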

February 20, 2016 at 11:41 pm

Ah okay, thanks for the clarification. I trust your total of 11/15 for the 22-set family.

So for the 22-set family, the translates of {1,2,4,5} have weight 1/30, even though they can be removed?

February 21, 2016 at 8:27 am

Note that in Eckhard’s 30-set example, the optimal solution is not unique: one can assign weights to the three translates of and to all other sets, provided that and (and , but this is implied by the previous condition). In particular, one can choose equal weights .

So, to refine the question slightly: for symmetric families, is there always an optimal solution with all weights equal?

February 20, 2016 at 4:32 pm |

This is an attempt to “explain” the patterns we’ve seen in the weights. (I believe it fits the examples with one and two weights, but I haven’t tested it against the new examples.)

The simplest way to satisfy the constraints if is the following:

For all , we have , unless there is some such that . (This only matters if we have strict inequality, but I don’t want to confuse wordpress.)

Thus, the simplest way to choose weights is to weight the minimal elements in a way that avoids conflicts.

How do the weights go if we choose this simplified scheme?

Let the minimal elements be . Choose the with the *smallest upset* and let every element of have the same weight. For convenience, let’s assume that .

Now we repeat this process for , greedily choosing the minimal with smallest upset in .

Some questions are:

– does this pattern hold for more complicated examples?

– are there examples where some set above the largest minimal element has weight larger than the largest minimal element?

February 20, 2016 at 4:49 pm

Sorry, “largest” minimal element is incorrect, it should be minimal element with smallest upset.

An idea for an inductive proof would be to show that it is optimal to weight every set above by , then show that the set system satisfies suitable hypotheses to continue the proof.

(P.S. Tim, this idea is taken from Giorgis’ work, where he removes “channels” from Ruzsa’s layered graphs.)

February 20, 2016 at 4:54 pm

Actually, if I understand Eckhard’s example from Feb 20, 12:23pm, then the answer to the second question is “yes”, since the translates of {1,2,4,5} have weight zero, but the elements of their upset have weight 1/36.

So this simple scheme is not always the optimal scheme. Although as Eckhard mentions, the translates of {1,2,4,5} can be removed.

February 21, 2016 at 2:59 am

Since both Brendan and Miroslav seem to be thinking along these lines, let me try to clarify a bit more the decomposition of a weight function into a positive linear combination of indicator functions of upper sets.

Let be nonnegative and monotone. As we’ve already observed, is an upper set in . Now let be the largest value attained by , and consider , where is the indicator function. Then is still nonnegative and monotone, but the largest value that it attains is smaller by one. By induction, we therefore obtain a decomposition of of the form , where ranges over all upper sets in and . Of course, it follows that the same kind of decomposition also works if is real-valued.

However, it may be worth noting that such a decomposition is not unique. For an example, let be the function that assigns to every subset its cardinality. This decomposes into the indicator function of all sets that contain plus the indicator function of all sets that contain , while the prescription decomposes it differently into the indicator function of plus the indicator function of all nonempty sets.

By the way, does anyone see a relation between our current version of weighted FUNC and Möbius inversion? (I don’t know enough about the latter to say, but have the feeling that it might be relevant.)

@Miroslav: to get the LaTeX to display here, you have to add “latex” after the dollar sign, like this: $latex

February 21, 2016 at 6:57 pm

Nice! This is a good way of putting it.

From the examples, it seems that the weights depend on the generators of the set system, so perhaps we can improve our decomposition by summing only over the “principal upsets” given by the generators.

I wonder if Mobius inversion can be used to do “partial summation” to convert between and . According to wikipedia, the zeta function and Mobius function of an incidence lattice are analogous to integration and differentiation, so perhaps we can express our ‘s as “derivatives of indefinite integrals” and “integrate by parts”.

February 21, 2016 at 7:00 pm

Here’s a link to the relevant wikipedia page: https://en.wikipedia.org/wiki/Incidence_algebra

Ah and I see it also says that for the natural numbers with the standard order, Mobius inversion is related to differencing, so perhaps the “Mobius inversion as integration by parts” heuristic is standard.

February 22, 2016 at 12:36 am

Ah, yes, looking at the principal upsets is a good idea. In fact, if we have any function , the Möbius transform is , where denotes the principal upset of . This is indeed of the form , but with the sum only ranging over the principal upsets.

Conversely, Möbius inversion will therefore decompose any function into a sum over indicator functions of principal upsets. For dimensional reasons, this decomposition is now unique.

What seems less clear is how to recognize the Möbius inverse of a monotone function as opposed to the Möbius inverse of just any function. Certainly the Möbius transform of a nonnegative function will be monotone, but the converse often fails: the indicator function of a non-principal upset is not a nonnegative linear combination of indicator functions of principal upsets, and therefore its Möbius transform must take on at least one negative value.

Anyway, does any of this help us to explain the patterns that we’ve seen in the data?
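The uniqueness of the principal-upset decomposition can be sanity-checked numerically. The following illustrative Python sketch (my own notation, working on the full power-set lattice, where the Möbius formula is simplest) computes the coefficients via Möbius inversion and verifies the reconstruction; for f = cardinality it recovers exactly the singleton upsets, as in Tobias’s earlier example.

```python
# Illustrative sketch: on the power set of a finite ground set, every
# function f decomposes uniquely as a linear combination of indicator
# functions of principal upsets; the coefficients are the Moebius
# inverse c(B) = sum over A subset of B of (-1)^{|B \ A|} * f(A).
from itertools import combinations

ground = (1, 2, 3)
subsets = [frozenset(c) for r in range(len(ground) + 1)
           for c in combinations(ground, r)]

f = {a: len(a) for a in subsets}  # example: f = cardinality

# Moebius inversion over the subset lattice.
c = {b: sum((-1) ** (len(b) - len(a)) * f[a] for a in subsets if a <= b)
     for b in subsets}

# Reconstruction: f(A) equals the sum of c(B) over all principal
# upsets containing A, i.e. over all B contained in A.
for a in subsets:
    assert f[a] == sum(c[b] for b in subsets if b <= a)

# For f = cardinality, only the singletons get a nonzero coefficient.
print(sorted((tuple(sorted(b)), c[b]) for b in subsets if c[b] != 0))
# [((1,), 1), ((2,), 1), ((3,), 1)]
```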

February 20, 2016 at 10:18 pm |

One can derive certain structural properties of any optimal solution to our linear maximization problem. Keep an optimal solution fixed, and denote

the set of “tight” points for this constraint. Also, let

be the sequence of all achieved values.

One crucial observation is that, for any ,

is also a union-closed family.

There are obvious relations between the restriction of our solution to this family and the optimal solution (possibly rescaled) computed for this new family.

Every set in the complement of this new family (for ) hits the “tight point set” ; otherwise some inclusion-wise maximal elements could increase their weight, which contradicts the optimality of .

Further discussion needs to split according to how the tight points for the new problem relate to the original ones, for example.

In some scenarios we may scale up on one piece and scale down on the other to satisfy the constraints. The fact that, by the optimality of our solution, this will not increase the total weight will then tell us useful information about the solution .

February 21, 2016 at 1:20 am |

There might be a mistake in here somewhere, but it seems to me that one can prove FUNC by combining Tobias Fritz’ description of union-closed families with Fourier analysis in the following way:

We use to denote the set , for the power set of , for the vector whose th coordinate is and all other coordinates , and, as usual, for the Fourier transform of of , where .

Regard subsets of as elements of in the usual way ( being identified with the vector , where iff ), let be any union-closed family of sets, and let be the indicator function of . Then the element belongs to at least half the sets in iff $\hat{f}(e_i)$ is not positive.

Let be the list of conditions defining in Tobias Fritz’ description, being shorthand for . Since would be tautological in , we can require that for all (this will be relevant below). Now consider the polynomial . It evaluates to iff and for every $j \in B_i$, and otherwise to . Hence is the indicator function of the family of all sets satisfying , and hence .

Consider the function defined by and . Note that . The Fourier coefficient equals the coefficient of in the polynomial .

Note that the constant term of is . For the coefficient of in , we have the following:

If , then the coefficient of in is .

If and , then the coefficient of in is . Note that “ and ”

is simply equivalent to “,” since for all .

If and , then the coefficient of in $1+\frac{x_{b_k}-1}{2} \prod_{j \in B_k} \frac{x_j+1}{2}$ is .

Therefore, the coefficient of in is . Defining , we obtain the simpler expression of for the coefficient of in .

Since by double counting and , we have .

Now the proof of FUNC is simple: We know that satisfies FUNC if it contains a singleton set, so suppose it doesn’t: In that case, we have for all , and hence . Since is not positive, at least one of the must not be positive either, and FUNC is proven.
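The relation used at the start of this argument, between the abundance of an element and the first-level Fourier coefficients of the indicator function, is easy to sanity-check. Here is an illustrative Python sketch (assuming the standard convention chi_i(A) = (-1)^{[i in A]}, which may differ slightly from the convention intended above):

```python
# Sanity check: with the convention chi_i(A) = (-1)^{[i in A]}, the
# Fourier coefficient hat{f}(e_i) of the indicator f of a family equals
# 2^{-n} * (#sets avoiding i - #sets containing i), so the element i is
# in at least half the sets iff hat{f}(e_i) is not positive.
from fractions import Fraction

ground = (1, 2, 3)
n = len(ground)
# A small union-closed family over {1,2,3} (1 is abundant, 2 and 3 not).
family = [frozenset(s) for s in [set(), {1}, {1, 2}, {1, 3}, {1, 2, 3}]]

def fourier_e(i):
    # hat{f}(e_i) = 2^{-n} * sum over A in the family of (-1)^{[i in A]}
    return Fraction(sum((-1) ** (i in a) for a in family), 2 ** n)

for i in ground:
    contains = sum(1 for a in family if i in a)
    assert (fourier_e(i) <= 0) == (2 * contains >= len(family))
    print(i, fourier_e(i), contains)
```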

February 22, 2016 at 3:12 pm

Another point about this attempted proof: we can’t necessarily assume that for all , merely because contains no singleton set. In a union-closed family such that there exist such that imply , there could be a of size 1.

February 22, 2016 at 5:01 pm

scratch my last comment, it’s wrong. Apologies.

February 21, 2016 at 1:26 am |

Correction: In my previous comment, it should read “Then the element belongs to at least half the sets in iff is not positive” and “It evaluates to iff and for every , and otherwise to .”

February 21, 2016 at 2:40 am

What a brave attempt! To scrutinize this a bit, here are two questions:

1) Are you sure that the Fourier coefficient is equal to the coefficient of in ? I’d think that every odd power of would contribute as well, and more generally every monomial in which occurs with an odd power and all other variables with an even power.

2) What you show in the end is . As far as I understand, this is equal to . So this seems to be trying to show that the average abundance of a uniformly chosen element is at least 1/2, which we know to be false. Am I missing something?

February 21, 2016 at 7:26 pm

1) Well, defines an inner product on the space of real-valued functions on with an (orthonormal) basis given by the set of functions , and note that . is of the form for some coefficients , so we have . Since and are two linear combinations of the basis vectors that both equal the same vector, we have for all .

2) Of course you’re correct that this proof can’t work for this reason. I think the concrete mistake that led to this false conclusion is the assertion that must not be positive because can’t be positive, overlooking that might very well be negative! It was stupid of me not to notice this. Nevertheless, I’m optimistic that the general approach (using the Horn clause description to arrive at a polynomial expression for the indicator function of , which can then be used to obtain the value of ) could be useful in proving FUNC.

February 21, 2016 at 11:52 pm

1) Yes, but your original expression for as a product over the polynomials associated to the Horn clauses also contains monomials of higher order. Maybe we’re getting confused because a polynomial like is nonconstant as a polynomial, but constant as a function on the hypercube?

2) Your was defined as . How can this possibly be negative?

I agree that the idea of combining the Horn clause description with Fourier analysis seems very much worth pursuing. But wouldn’t one expect an analytic argument to rely on analytic bounds like Cauchy-Schwarz, hypercontractivity, etc?

February 21, 2016 at 3:49 am |

I’ve been contemplating the fibre bundle construction a bit more, and there’s a small chance that this may turn into a proof strategy. But what I mostly find tantalizing about the following approach is that it’s very close to the search for counterexamples. In the special cases that I’ve looked at so far, one can write down quite concretely which properties a (minimal) counterexample must have, and then one ends up finding reasons for why these properties are difficult to satisfy jointly. However, so far I don’t have any definite results.

The basic idea is to approach the lattice-theoretic formulation of FUNC via induction on the size of . There are two completely independent problems that need to be solved:

1) Given a fibre bundle with projection map , show that if both and all fibres satisfy FUNC, then so does . This is the part that’s especially close to the search for a counterexample. So far, I’ve been looking at the case where is the two-element lattice with , and am currently analyzing the tradeoffs between the abundances of the join-irreducibles in vs those in .

2) Show that every arises as the total space of some nontrivial bundle . Again I’ve looked at the case so far, and it turns out that it’s possible to find such a if and only if has a prime ideal (or, equivalently, a prime filter). While these exist in many large classes of lattices, M3 is the smallest lattice which doesn’t have one.

In general, the problem is this: find a lattice with and a surjective lattice homomorphism (i.e. preserving joins, meets, top and bottom). In other words, find a quotient lattice of . In addition, this must satisfy the following lifting property: for every in and and , there is with . This is the difficult part.

In order to turn this into a possible proof strategy for FUNC, one clearly needs some sort of positive answer to 2), which may be unlikely. But one can still hope to obtain a nice class of lattices for which 2) holds and try to prove FUNC for these.

In case that anyone has any ideas about this, especially part 2), it would be great to know!

February 21, 2016 at 5:49 am

In terms of congruences, Problem 2) is equivalent to the existence of a nontrivial congruence on such that if and , then there exists such that .

A sort of trivial counterexample to 2) is M3 itself: the only nontrivial quotient of this is the Boolean algebra on two generators (power set of a 2-element set), but the quotient map doesn’t have the lifting property. But it would be more satisfactory to have counterexamples to 2) for which FUNC doesn’t hold trivially.

March 9, 2016 at 10:00 am

To wrap this up, let me explain that 2) is indeed far from true, so that this approach cannot be a general proof strategy.

The reason is that there are arbitrarily large finite lattices that are simple, i.e. do not have any nontrivial lattice quotients. For example, there is a theorem of Ore saying that the partition lattice of a finite set is simple. (See Theorem IV.4.2 in Grätzer’s “General Lattice Theory”.)

February 21, 2016 at 3:36 pm |

In trying to prove FUNC, is it enough to consider those that contain every set of the form ?

My first thought was that this should be an easy-to-see wlog assumption, but upon further reflection this doesn’t seem clear at all. The property is strictly stronger than assuming that separates points. And just throwing in the missing sets contributes a lot of unwanted abundance to most elements, so this doesn’t look like a wlog procedure.

Since all the nontrivial examples that I remember being discussed here have this property, I wonder whether we should look for some interesting examples that don’t have it.

The reason that I’m asking is that this property translates into atomisticity in the lattice-theoretic formulation. This is a nice structural property of a lattice that would be convenient to assume.

February 21, 2016 at 5:15 pm |

It doesn’t seem clear that we can make that assumption, since adding those sets to a family increases some abundances.

Is it always the case that a separating union-closed family contains at least one complement of a singleton?

February 21, 2016 at 5:30 pm

Ah yes, I think that last statement follows easily from the separability condition: by taking two distinct elements not in a set of maximal cardinality less than , finding a set that separates them, and taking the union, for a contradiction.

February 21, 2016 at 5:31 pm

Yes; any inclusion-wise maximal proper set A in such a family is the complement of a singleton. If (at least) two elements x, y are missing, then using the separating property we can find a set in our family containing just one of them; the union of this set with A is a larger proper set; a contradiction.

February 22, 2016 at 12:03 am

In fact we can assume that at least two of the sets X\{x} are in \mathcal{A}. The only way there can be only one such set X\{x} in \mathcal{A} is if that particular x is in only one set, namely the full set X. This needlessly increases the abundance of all other elements.

February 22, 2016 at 3:26 am |

Let me mention a question that maybe I asked before, or is trivial, or both. Let F be a union-closed family; what is the minimal Hamming distance from F to a monotone (up) family? There is a lower bound on this quantity in terms of “directed” influences, which are closely related to the quantities we look for. Directed influences measure the sets in F not containing x such that adding x takes you away from F. So we can also ask: what is the smallest directed influence a union-closed family must have?
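For tiny ground sets, Gil's quantity can be computed by brute force. An illustrative Python sketch (my own code, enumerating all up-sets of the power set of {1,2,3}; the example family is hand-chosen):

```python
# Brute force on a tiny ground set: the minimum Hamming distance from a
# union-closed family to a monotone (upward-closed) family, found by
# enumerating all up-sets of the power set of {1,2,3}.
from itertools import combinations

ground = (1, 2, 3)
subsets = [frozenset(c) for r in range(len(ground) + 1)
           for c in combinations(ground, r)]

def upsets():
    """All upward-closed subfamilies of the power set of the ground set."""
    for mask in range(2 ** len(subsets)):
        fam = {s for k, s in enumerate(subsets) if mask >> k & 1}
        if all(t in fam for s in fam for t in subsets if s <= t):
            yield fam

# A union-closed family that is not itself monotone.
family = {frozenset(s) for s in [{2}, {3}, {2, 3}, {1, 2, 3}]}
best = min(len(family ^ up) for up in upsets())
print(best)  # 2: e.g. replace {2} by {1,3} to reach the up-set of {3}
```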

February 22, 2016 at 9:02 am |

Fifteen hours ago, I put the following question on mathoverflow : would it be reasonable to conjecture that there is a real constant c>1/2 (perhaps c could be taken equal to 2/3) such that, for every natural number n, for every union closed family of n distinct finite sets with at least two elements in their union U, there is at least one subset {a,b} of U, with two elements, that intersects at least cn−1 sets of the family ?

It’s here: https://mathoverflow.net/questions/231769/a-generalization-of-frankls-conjecture

I got no definite answer, but it was suggested that I put the question here. In fact, I am not at the level of this Polymath, but since my question was not dismissed on Mathoverflow, perhaps it is not too presumptuous to put it here.

February 22, 2016 at 9:58 am

This reminds me of a generalization that Thomas Bloom suggested, and that has not as far as I know been disproved, namely that for every k there is a subset of size k that is contained in at least 2^{-k}n sets in the family.

A natural variant (somewhat generalizing yours) would be that for every k there is a subset of size k that intersects at least (1-2^{-k})n sets in the family.
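The “intersects” variant is easy to test by brute force on small families. A sketch, with a small union-closed family of my own choosing:

```python
from itertools import combinations

fam = [frozenset(s) for s in [{0}, {1, 2}, {0, 1}, {0, 2}, {0, 1, 2}]]
# Sanity check: the example really is union-closed.
assert all(a | b in fam for a, b in combinations(fam, 2))

n = len(fam)
U = set().union(*fam)
for k in range(1, len(U) + 1):
    # Best k-subset: the one meeting the largest number of members.
    best = max(sum(1 for a in fam if a & set(S))
               for S in combinations(U, k))
    assert best >= (1 - 2 ** -k) * n
```

For this family the element 0 alone already meets 4 of the 5 sets, comfortably above the n/2 bound for k = 1.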

February 22, 2016 at 1:24 pm |

It easily follows from the union-closed sets conjecture, and the existence of a subset of size k that intersects at least (1-2^{-k})n sets can be shown by induction on k. For k=1 the conjecture allows us to find an element x such that at most n/2 sets are “bad”, not containing x. But the family of these “bad” sets is another union-closed family, and we can find an element y such that at most half of the “bad” sets are “bad-bad”, not containing y. So at most n/4 sets contain neither x nor y; we can certainly continue by induction for every integer k (the restriction from above suggested by Alec is not needed).

February 22, 2016 at 1:35 pm

Good point!

February 22, 2016 at 2:02 pm

But after at most \log_2 n steps, the set of picked points will dominate all sets of our family and the family of “bad-bad-…-bad” sets will already be empty; using the induction further would be rather vacuous.
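The halving process can be watched in action on a small example. In the sketch below I greedily pick a most-abundant element of the remaining “bad” sub-family (which, as noted above, is itself union-closed); the example family, the power set of {0, 1, 2} minus the empty set, is chosen only for illustration, and the halving at each step is observed here rather than guaranteed without FUNC:

```python
import math
from itertools import combinations

# Power set of {0, 1, 2} without the empty set: union-closed, n = 7 sets.
ground = [0, 1, 2]
fam = {frozenset(c) for r in range(1, 4) for c in combinations(ground, r)}
n = len(fam)

picked, bad = [], set(fam)
while bad:
    # Pick a most abundant element of the remaining bad sub-family.
    universe = set().union(*bad)
    x = max(universe, key=lambda e: sum(1 for s in bad if e in s))
    picked.append(x)
    bad = {s for s in bad if x not in s}

# The bad count at least halves each step here (7 → 3 → 1 → 0),
# so the process stops within about log2(n) steps.
assert len(picked) <= math.ceil(math.log2(n + 1))
```

With the empty set included the process would never terminate, since no picked point can meet it; that is one reason the domination remark above needs the family to avoid ∅.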

March 27, 2016 at 4:13 am |

Gil has asked whether there is always an injection from to such that each set in is a subset of its image. I have proved this conjecture, so I’ll publish the paper soon.

June 8, 2016 at 10:41 pm |

Let be a set of subsets of a finite set , and let be an odd integer.

We call the subset of consisting of the abundant elements of in . By “abundant elements” I mean the elements of that are in at least half of the members of .

Let be closed under union.

A conjecture stronger than FUNC:
