https://arxiv.org/abs/2105.14912

See [this question on math.stackexchange](https://math.stackexchange.com/q/3646721/573047).

However, experimentally, 95% of families with a universe of at most 3 elements have that property, but this already drops to 69% for families with a universe of at most 5 elements.
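The property in question is not restated in this excerpt, so the sketch below only builds the scaffold for such an experiment: a brute-force enumeration (Python, illustration only) of every nonempty union-closed family over a 3-element universe, into which the property check could be dropped.

```python
from itertools import chain, combinations

def union_closed(family):
    """A family is union-closed if the union of any two members is a member."""
    return all(a | b in family for a in family for b in family)

# All 8 subsets of the universe {0, 1, 2}, as frozensets.
universe = range(3)
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(universe, k) for k in range(len(universe) + 1))]

# Enumerate every nonempty family of subsets; keep the union-closed ones.
closed_families = []
for mask in range(1, 2 ** len(subsets)):
    family = {subsets[i] for i in range(len(subsets)) if mask >> i & 1}
    if union_closed(family):
        closed_families.append(family)

print(len(closed_families))  # number of union-closed families on 3 elements
```

For universes of 5 elements the same loop is infeasible (2^32 candidate families), which is presumably why the quoted experiment sampled rather than enumerated.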

This paper is hard to read because its terminology departs from the literature (irreducible elements of a union-closed family have already been defined in many papers). It also seems the author needs to read other papers more carefully. For instance, “According to Poonen [1], the union-closed conjecture is false for some Ω (union-closed) satisfying |U(Ω)| = ∞. Therefore, the conjecture is investigated only for Ω satisfying |U(Ω)| < ∞.” is exactly the opposite of what Poonen proves: he shows that the case |U(Ω)| = ∞ can be reduced to the case |U(Ω)| < ∞.


My goal was to avoid 0-rows in the submatrices, but I guess it does not work. If we change the 0s into 1s where the minor and major rows differ, and allow identical columns, this cannot help for the base case, only hurt, in the only place where the distinct-columns assumption is used. But unfortunately this change may turn a counterexample to the strengthened hypothesis into a non-counterexample (even if that is not the case for the original hypothesis).

If one row ‘majorizes’ the other, i.e. one element belongs to every set that the other belongs to, then the row of lower Hamming weight can always be deleted: that element is not needed when computing the maximum of the densities #A_i / #A, where A_i is the subfamily of all sets containing the element i. In other words, if the original matrix is a counterexample to the original, non-strengthened conjecture, then so is the new matrix, and vice versa. With the strengthened conjecture, however, this relationship does not seem to hold, as the density of the whole matrix may go up. Perhaps this means the row of lower Hamming weight has to be taken with weight 0 in the distribution?
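Rendering the density argument concretely, a Python sketch (the helper names are mine; `max_density` is the maximum over elements i of #A_i / #A, with the family kept as a list so that columns made identical by the deletion are still counted):

```python
def max_density(family):
    """Maximum over elements i of (#sets containing i) / (#sets), i.e. #A_i / #A."""
    elements = set().union(*family)
    return max(sum(1 for s in family if i in s) for i in elements) / len(family)

def majorizes(family, j, i):
    """Element j majorizes element i: every set containing i also contains j."""
    return all(j in s for s in family if i in s)

# In this union-closed family, element 1 majorizes element 0, so the row of
# element 0 can be deleted without changing the maximum density.
family = [frozenset(s) for s in [{0, 1}, {1}, {1, 2}, {0, 1, 2}]]
assert majorizes(family, 1, 0)
reduced = [s - {0} for s in family]  # delete the lower-weight row
assert max_density(family) == max_density(reduced)
```

Deleting the majorized row cannot change the maximum, since the majorizing row already achieves at least the same density; the subtlety in the comment above is that the density of the whole matrix may still change.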

Apparently the above strengthening is false: see this comment.

Right, my initial ultra-weak version of Frankl is that every union-closed family has nonnegative correlation with a maximal intersecting family. (I am not entirely sure how, and if, you can weaken it further in an interesting way by giving up the “maximal” condition.)

Dear Gil, OK, it is as I thought, so my previous comment applies. The problem is that if is too small, then it is enough for in order to get nonnegative correlation. It seems this kind of question is interesting only when is on the order of , as is the case with the dictator functions.

Or better written

Dear Igor, I mean .

Ah yes, thanks for the correction, and the far more efficient way of tackling the calculation. (Also thanks for the Wolfram Language lesson, most instructive.)

Tim: That sounds reasonable. I’ll try it out when I next have some time…

Regarding the correlation between , could you define what you mean? If I take it to mean the correlation between the corresponding boolean functions and let , then when is small relative to , we trivially have nonnegative correlation (just because most sets are in neither nor ):

Since the largest set cannot rise, we always have , and from this it follows that the correlation will be nonnegative for . Moreover, this is the regime we really care about for weak FUNC, since for we have, by Reimer’s result, that the average set size is at least , and hence some element appears in at least sets.
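Taking “correlation” to mean E[fg] − E[f]E[g] for the indicator functions f, g of the two families under the uniform measure on subsets (the reading proposed above), a small Python sketch, with a union-closed family built on the dictator element 1 and the maximal intersecting family of all sets containing 0 (the example families are mine):

```python
from itertools import chain, combinations

def all_subsets(n):
    """Every subset of {0, ..., n-1}, as frozensets."""
    return [frozenset(s) for s in chain.from_iterable(
        combinations(range(n), k) for k in range(n + 1))]

def correlation(F, G, n):
    """E[fg] - E[f]E[g] for the indicators of F, G over uniform subsets of [n]."""
    sets, N = all_subsets(n), 2 ** n
    ef = sum(s in F for s in sets) / N
    eg = sum(s in G for s in sets) / N
    efg = sum(s in F and s in G for s in sets) / N
    return efg - ef * eg

n = 3
F = {frozenset(s) for s in [{1}, {0, 1}, {1, 2}, {0, 1, 2}]}  # union-closed
G = {s for s in all_subsets(n) if 0 in s}  # maximal intersecting (dictator 0)
assert all(a | b in F for a in F for b in F)
print(correlation(F, G, n))  # 0.0: dictators on different coordinates are independent
```

The example also illustrates why the dictator functions sit at the boundary of the question: two dictators on different coordinates have correlation exactly zero.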

Rik,

The families created aren’t necessarily union-closed. The code only takes unions of _pairs_ of sampled sets. (It also discards the original sample.)

The following seems to work correctly. For efficiency, I’ve used bitstrings (binary integers) to represent the sets.

```mathematica
Manipulate[
 randomSampleInts = RandomInteger[2^m - 1, r];
 ucFamilyInts =
  Nest[DeleteDuplicates[BitOr @@@ Subsets[#, {1, 2}]] &,
   randomSampleInts, IntegerLength[r, 2]];
 ucFamilyBits = IntegerDigits[#, 2, m] & /@ ucFamilyInts;
 elementCounts = Total@ucFamilyBits;
 Column[{
   Row[{"Base set is ", Range@m}],
   Row[{"Cardinality of union closed family of subsets is ",
     Length[ucFamilyInts]}],
   Row[{"Counts are ", elementCounts}],
   Row[{"Ordered freqs are ",
     ListPlot[elementCounts, Filling -> Axis, ImageSize -> {400, 200},
      PlotRange -> {0, Length[ucFamilyInts]},
      GridLines -> {{}, {Length[ucFamilyInts],
         Length[ucFamilyInts]/2}}]}],
   Row[{"Union closed family is ",
     Sort[Join @@ Position[#, 1] & /@ ucFamilyBits]}]
   }],
 {{m, 9, "Base set cardinality"}, 0, 19, 1},
 {{r, 5, "Random sample size"}, 0, 100, 1}]
```
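A Python cross-check (a sketch, not part of the original code) of why the iteration count matters: one round of pairwise unions need not be union-closed, but iterating to a fixed point is, and ⌈log₂ r⌉ rounds always suffice for r generators, because each round doubles the number of original sets a member can be the union of. That is what the `Nest[..., IntegerLength[r, 2]]` in the Mathematica code exploits.

```python
def close_pairwise_once(ints):
    """One round: keep the family and add the bitwise OR (union) of every pair."""
    return set(ints) | {a | b for a in ints for b in ints}

def union_closure(ints):
    """Iterate pairwise closure to a fixed point: the true union closure."""
    fam = set(ints)
    while True:
        nxt = close_pairwise_once(fam)
        if nxt == fam:
            return fam
        fam = nxt

# The singletons {0}, {1}, {2} as bitmasks 1, 2, 4: one round misses the
# triple union {0, 1, 2} (bitmask 7), so a single pass is not union-closed.
generators = [1, 2, 4]
one_round = close_pairwise_once(generators)
full = union_closure(generators)
assert 7 not in one_round
assert 7 in full
assert full == close_pairwise_once(full)  # the fixed point is union-closed
```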

Here’s a naive suggestion that may well be complete nonsense, as I haven’t thought about it properly. One could generate G randomly by picking a few random sets (according to a distribution that looks sensible) and taking the upward closure. Then one could iteratively choose sets in G and try to remove elements from them, continuing until it is no longer possible to remove any elements without violating the conditions that F and G must satisfy. (To be clear, when one removes an element from a set, the resulting set is the new set that corresponds to the one you removed an element from, which in turn corresponds, after some chain of removals, to a set in G.) In general I see no reason why the set F you get at the end should have an up-compression to G.
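The first step of this suggestion, generating a family as the upward closure of a few random sets, might be sketched as follows (Python, illustration only; the names are mine). Note that an up-set is automatically union-closed, since a union of supersets of a generator is still a superset of it.

```python
from itertools import chain, combinations
from random import sample, seed

def up_closure(generators, n):
    """All subsets of {0, ..., n-1} containing at least one generator."""
    all_sets = (frozenset(s) for s in chain.from_iterable(
        combinations(range(n), k) for k in range(n + 1)))
    return {s for s in all_sets if any(g <= s for g in generators)}

seed(1)
n = 5
gens = [frozenset(sample(range(n), 2)) for _ in range(3)]  # a few random 2-sets
G = up_closure(gens, n)
# Upward closures are union-closed:
assert all(a | b in G for a in G for b in G)
```

The element-removal phase of the suggestion is the delicate part (each removal must preserve the conditions on F and G), and is not attempted here.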

I’ve tested this on a few million random examples with the size of the ground set and the size of and satisfying and , where has no abundant elements and is the result of up-shifting applied to ; and found none that satisfy the condition. Seems promising. It would be interesting to investigate examples where is not the result of up-shifting applied to , but I don’t see an easy way to find those.

Yes, I am also assuming that . Sorry for not saying it earlier. Also note that there are valid but such that is not the result of up-shifting applied to .
