科学网


Tag: computational complexity


Related blog posts

Foundations of Cryptography: A Reading List
OneDay777 2018-2-27 11:05
A working knowledge of probability theory and computational complexity theory is essential for cryptography. Some classic references:

Probability theory:
1. W. Feller, An Introduction to Probability Theory and Its Applications (Chinese edition available on Dangdang).

Complexity of algorithms:
1. Computers and Intractability: A Guide to the Theory of NP-Completeness. https://dl.acm.org/citation.cfm?id=578533
2. Introduction to the Theory of Computation.

Randomized computation:
1. Modern Cryptography, Probabilistic Proofs and Pseudorandomness. https://www.springer.com/us/book/9783540647669
2. Randomized Algorithms.
Category: Cryptography Fundamentals | 2861 reads | 0 comments
Matrix Multiplication Requires O(n^3) Time and Cannot Be Reduced Further
Popularity 4 | zlyang 2013-9-18 09:52
Nick Trefethen — director of Oxford University's computing centre, Fellow of the Royal Society, and president of SIAM — wrote a short article in the November 2012 issue of SIAM News that concisely laid out two open mysteries in numerical linear algebra: (1) how to invert a matrix quickly, and (2) why Gaussian elimination is stable in practice. The first problem, in his words: "The first problem is, can an n × n linear system of equations Ax = b be solved in O(n^{2+ε}) operations, where ε is arbitrarily small? In other words, could there be a 'fast matrix inverse,' like the fast Fourier transform?" In fact, I believe the lower-bound question for matrix multiplication was settled long ago by Chinese researchers. My view is as follows: for two n × n matrices whose entries are mutually independent, the n^3 products appearing in the definition are themselves mutually independent, so n^3 multiplications are necessary. Since the additions generate lg n carry bits, multiplying matrices of mutually independent entries requires O(n^3 × lg n) information. In other words, the information lower bound for multiplying random rational matrices is O(n^3 × lg n); no exact algorithm can do with less. The existing O(n^{2+ε}) matrix-multiplication algorithms all trade accuracy (larger error) for speed. Their core technique is to merge several multiplications into one; on a finite-precision digital computer, this necessarily amplifies error. The 1990 paper by Chen Daoqi, Xie Youcai, and Ying Wenlong in Chinese Science Bulletin, "An optimal algorithm for matrix multiplication", uses only one multiplication; it directly inspired my observation. References: Nick Trefethen. The Smart Money's on Numerical Analysts. SIAM News, Volume 45, Number 9, November 2012. Two numerical-algebra problems posed by the SIAM president and FRS (1). http://www.mysanco.com/wenda/index.php?class=discuss&action=question_item&questionid=1097 Chen Daoqi, Xie Youcai, Ying Wenlong. An optimal algorithm for matrix multiplication. Chinese Science Bulletin, 1990, 35(3): 161-161. Postscript: (1) I have heard that "matrix multiplication" and "matrix inversion" are not the same problem, though I am genuinely unclear why. Gaussian elimination? Cramer's rule? The inverse matrix? Anything else? And what is the relation between "2. the practical stability of Gaussian elimination" and "matrix inversion"? (2) The practical stability of Gaussian elimination should come down to the denominators in the divisions being clearly bounded away from 0. The core causes of numerical instability are: (a) finite word length (a bounded number of significant digits), and (b) denominators close to 0 (for matrices, near-singularity). A denominator of small absolute value (especially an irrational one) produces a long quotient; with finitely many significant digits, truncation error results, which is the core cause of numerical instability. Addition and subtraction change the number of significant digits slowly compared with multiplication and division, so it is the rapid growth of significant digits under multiplication and division, combined with finite precision, that produces the instability. (3) Of course, this does not mean that all numerical instability stems from lost significant digits; at present that cannot be asserted. ______________ Related material ______________ Weisstein, Eric W. "Strassen Formulas." From MathWorld -- A Wolfram Web Resource. http://mathworld.wolfram.com/StrassenFormulas.html "Unfortunately, Strassen's algorithm is not numerically well-behaved. It is only weakly stable, i.e., the computed result satisfies the inequality where u is the unit roundoff error, while the corresponding strong stability inequality (obtained by replacing matrix norms with absolute values of the matrix elements) does not hold."
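To make the accuracy-for-speed trade-off concrete, here is a small sketch (my own illustration, not from the post or the cited papers) contrasting the naive 8-multiplication 2×2 product with Strassen's 7-multiplication scheme; the extra additions and subtractions in the latter are exactly where rounding error can grow on finite-precision hardware.

```python
def naive_2x2(A, B):
    # Definition of the matrix product: 8 scalar multiplications.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a*e + b*g, a*f + b*h],
            [c*e + d*g, c*f + d*h]]

def strassen_2x2(A, B):
    # Strassen's scheme: 7 multiplications instead of 8,
    # at the cost of 18 extra additions/subtractions.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]
```

Applied recursively on blocks, the 7-multiplication identity gives the O(n^{log2 7}) algorithm; the MathWorld quotation above is about the weak numerical stability of exactly these merged products.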
Related links: A plain-language account of Chaitin's theorem http://bbs.sciencenet.cn/home.php?mod=space&uid=107667&do=blog&id=478066 The length of mathematical proofs: negatively correlated with the power of the axiom system http://bbs.sciencenet.cn/blog-107667-729907.html
Category: Foundational Mathematics - Logic - Physics | 25458 reads | 26 comments
A Simple Argument That P = NP
Popularity 2 | liuchao666 2013-1-1 13:42
Greetings, everyone — you have all worked hard. A good start is half the battle; happy new year, and may 2012, the doomsday nobody wanted and which indeed seems not to have arrived, close with a satisfying full stop. Now to business. P vs NP has long been a headache in computing. Many people believe P ≠ NP — typical Western logical thinking, which I have criticized often enough and will not repeat here. Let us discuss P = NP using the SAT problem as an example. SAT, the satisfiability problem for conjunctive normal forms, concerns a formula of the form A_1 ∧ A_2 ∧ … ∧ A_n, where each clause A_i (1 ≤ i ≤ n) has the form a_1 ∨ a_2 ∨ … ∨ a_k, and each literal a_i is a Boolean variable or its negation. SAT asks: is there an assignment of TRUE/FALSE to the Boolean variables that makes the whole formula true? The argument runs as follows. First, the notion of a maxterm: a simple disjunction over k Boolean variables in which, for every variable, exactly one of the variable and its negation appears, exactly once, with the i-th variable (or its negation) in the i-th position from the left (assuming a fixed ordering of the variables). Over k variables there are P = 2^k distinct maxterms, and each maxterm has exactly one falsifying assignment; write the binary number encoding the falsifying assignment of the i-th maxterm as M_i. Now construct a mapping F that assigns each maxterm a value: F(i) = ~2^{M_i} (the bitwise complement of 2 raised to the M_i). From F we can read off the value G(i) of each of the k variables: the binary expansion of G(i) is a periodic string T(m) of length P with period 2^{(k+1-i)}, and within each period: ① if 0 ≤ m < 2^{(k-i)}, T(m) = 0; ② if 2^{(k-i)} ≤ m < P, T(m) = 1. Now substitute the G(i) into any conjunctive normal form A_1 ∧ A_2 ∧ … ∧ A_n: if the result is 0, the formula is unsatisfiable; if it is nonzero, the formula is satisfiable. The decision problem for conjunctive normal forms has thus been turned into a simple numerical computation. Note that we do not actually need to substitute every G(i) into the formula (although for small k this is itself a fairly fast algorithm). What we really need is the computation of the expression T(m): for each disjunctive clause A_i, first compute an expression L(A_i) for where A_i takes the value 1; then, for the whole formula, check whether the regions where L(A_i) = 1 overlap. At last we have escaped the predicament of search! ####################################### With three variables a_1, a_2, a_3 as an example, the assignment is:

Formula | M_i | F(i)
a_1 ∨ a_2 ∨ a_3 | 0 | 11111110
a_1 ∨ a_2 ∨ ¬a_3 | 1 | 11111101
a_1 ∨ ¬a_2 ∨ a_3 | 2 | 11111011
a_1 ∨ ¬a_2 ∨ ¬a_3 | 3 | 11110111
¬a_1 ∨ a_2 ∨ a_3 | 4 | 11101111
¬a_1 ∨ a_2 ∨ ¬a_3 | 5 | 11011111
¬a_1 ∨ ¬a_2 ∨ a_3 | 6 | 10111111
¬a_1 ∨ ¬a_2 ∨ ¬a_3 | 7 | 01111111

The value of each individual variable is:

Formula | G(i)
a_1 | 11110000
a_2 | 11001100
a_3 | 10101010
¬a_1 | 00001111
¬a_2 | 00110011
¬a_3 | 01010101

A more rigorous write-up (or, rather, if one is still needed) will be tidied up another day; bricks welcome… …hungry, off to eat.
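The tables above can be read as a bitmask encoding of truth tables: each literal's column G(i) is a 2^k-bit integer, a clause's mask is the OR of its literals' masks, and the formula is satisfiable iff the AND over all clauses is nonzero. A sketch of that idea (my own code and encoding conventions, not the author's; note that the masks are 2^k bits long, so this vectorizes the truth table rather than escaping exponential cost):

```python
def literal_mask(k, var, negated):
    """Truth table of one literal over all 2**k assignments,
    packed into a (2**k)-bit integer: bit j is the literal's value
    under assignment j (variable i = bit (k-1-i) of j)."""
    mask = 0
    for assign in range(2 ** k):
        val = (assign >> (k - 1 - var)) & 1
        if negated:
            val ^= 1
        mask |= val << assign
    return mask

def cnf_satisfiable(k, clauses):
    """clauses: list of clauses, each a list of (var, negated) pairs.
    AND over clauses of the OR of literal masks; nonzero <=> satisfiable."""
    result = (1 << (2 ** k)) - 1          # all-ones: the empty conjunction
    for clause in clauses:
        clause_mask = 0
        for var, neg in clause:
            clause_mask |= literal_mask(k, var, neg)
        result &= clause_mask
    return result != 0
```

With k = 3 and var = 0, `literal_mask` reproduces the post's column for a_1, namely 11110000.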
4493 reads | 4 comments
A Polynomial Algorithm for Hamilton Cycles, Its Time Complexity, and a Detailed Proof
Popularity 9 | dulizhi95 2012-8-17 07:03
This paper has been posted on arXiv. Compared with previous versions, the algorithm is slightly adjusted, but the proof method is entirely new. The earlier proof was of course logically rigorous, ingenious, and effective, but it required "combine and split", which is very hard to understand and very hard to implement; so this time I have switched to a completely new method that is far easier to carry out. It is, admittedly, still quite hard to follow: anyone who can understand this algorithm and its proof thoroughly is without doubt extremely strong in algorithms. In any case, this proof is much easier to understand than my earlier "combine and split" method. Remarks: 1. Some of the wording in this paper is borrowed directly from a referee — a very good referee, to whom I am sincerely grateful. It was obvious that English is not my native language; my English is weak, and he rewrote many of my nonstandard English expressions. He also said that some passages remained unclear to him and pointed them out in detail, on which basis I made fundamental revisions. 2. I welcome criticism, guidance, and collaboration from expert colleagues. 3. I would welcome corrections or collaboration on English expression and SCI writing from bloggers with that expertise. Algorithm Definitions: For an undirected graph G with n vertices, let x_1, x_2, ... ,x_n denote the n vertices. A broad cycle is a cyclic sequence x_1, x_2, ... ,x_n, x_1 of the vertices of G in which each pair x_i, x_{i+1} may or may not be adjacent in G. A pair (x_i, x_{i+1}) (including (x_n, x_1)) of non-adjacent vertices is called a break point; so a broad cycle has between 0 and n break points. A break point is constituted by two vertices, say a and b, and we write it a*b. If two consecutive vertices a and b are adjacent, we call them a "connecting point", written ab. We write a…b to denote that there are some vertices between a and b. For an undirected graph with n vertices, the number of all break points and connecting points is n(n-1)/2. A connecting point or a break point is also called "a general point". A general point is constituted by two vertices, and we say it contains these two vertices. A segment: any run of vertices sitting side by side forms a sequence we call a "segment". So, in a broad cycle, any part between two general points is a segment, and a broad cycle has n(n-1) different segments. For each general point, we construct a broad cycle containing it; we call this the general point's broad cycle, and the general point the broad cycle's general point.
Because an n-vertex graph has n(n-1)/2 general points, we must construct n(n-1)/2 broad cycles (some of which may repeat). For a broad cycle, cut a segment at any place in the cycle and insert it between two consecutive vertices somewhere else in the cycle. Before inserting, we may rotate and extend the segment; note that for this rotation or extension the two end vertices need not be adjacent. We call this operation a "cut and insert". The main operation of our algorithm is the "cut and insert"; we now make it precise. Let x_1, ... ,x_n, x_1 denote a broad cycle, let s = x_i, x_{i+1}, ... ,x_{i+r} be a subsequence of this broad cycle (i.e., a segment), and let j be an index such that x_j and x_{j+1} are not in s. A broad cycle C is obtained by cutting s and inserting it between x_j and x_{j+1} if either C = x_j, x_i, x_{i+1}, ... ,x_{i+r}, x_{j+1}, x_{j+2}, ... ,x_n, x_1, ... ,x_{j-1}, x_j, or C = x_j, x_{i+r}, x_{i+r-1}, ... ,x_i, x_{j+1}, x_{j+2}, ..., x_n, x_1, ... ,x_{j-1}, x_j (addition is meant modulo n). Again, the segment may be rotated and extended before insertion. Algorithm FindHCycle: For an undirected graph with n vertices and m edges, first delete all the edges, obtaining a graph with n vertices and no edges. This graph has n(n-1)/2 break points. For each break point (or connecting point), we construct a broad cycle containing it; thus we construct n(n-1)/2 broad cycles (some of which may repeat). For each broad cycle, we try to attain the least number of break points under the current edge set. At the start, with no edges, all n(n-1)/2 broad cycles trivially fulfil the condition (least number of break points). The algorithm then consists of m steps, each called "a big step". At each big step we add one edge, called "the current added edge".
Note that before adding this edge, all broad cycles have the least number of break points, but after adding it, some may not. If so, we try, over all broad cycles, every possible single "cut and insert"; at least one of them yields a new broad cycle that attains the least number of break points (while its old one does not, i.e., its break-point count decreases by one). This new broad cycle necessarily contains the newly added edge. We let the new broad cycle replace the old broad cycle whose general point it shares. We call this a "little step"; each little step consists of one successful cut and insert plus one replacement. At each big step, we call the broad cycles present before adding the edge "old broad cycles". If, after the edge is added, some general point's least break-point count should decrease by one, and we obtain such a broad cycle with the least number of break points, we call it that general point's "new broad cycle". If a general point's broad cycle, no matter how it is arranged, always yields a new broad cycle by one cut and insert, we call this general point a "key general point", this broad cycle a "key broad cycle", and this cut and insert "a useful cut and insert". We say the key broad cycle contains this useful cut and insert; the key general point is the useful cut and insert's key general point; the resulting new broad cycle is the useful cut and insert's new broad cycle; and that cycle's general point is the useful cut and insert's new key general point. At each big step, whenever some new broad cycles remain to be produced, we can always find at least one useful cut and insert producing a new broad cycle, which then replaces its old one. We continue this process — i.e., finish the remaining little steps — until no broad cycle's break-point count can be decreased any further.
This means all the broad cycles fulfil the condition (least number of break points) under the newly added edge. We then go to the next big step, i.e., add another edge and repeat, until all m edges have been added and processed. After all m big steps, we have all n(n-1)/2 broad cycles with the least number of break points, each containing at least one distinct break point or connecting point. So if the graph has any Hamilton cycles, we obtain at least one. Time complexity: at each step, for one broad cycle, when we cut a segment and try to insert it between two consecutive vertices anywhere in the rest of the cycle, there are O(n) candidate insertion points; after each insertion we must check whether the result is a new broad cycle containing a general point with the least number of break points under the newly added edge (its old one not, i.e., its break-point count decreased by one). This comparison takes O(1) time (by memoization). Each broad cycle has O(n^2) segments; for each segment, rotation and extension take O(n) time; at each step there are O(n^2) broad cycles (even with replacements, all possible new broad cycles number O(n^2)); and there are m big steps. So the total time complexity is O(n^2) * O(n^2) * O(n) * O(n) * O(1) * m = O(n^6) * m. Too long, right? But this is the worst, theoretical case, and it yields all n(n-1)/2 broad cycles. In fact, extensive testing suggests that if an undirected graph contains at least one Hamilton cycle, our algorithm's average time to find one is only about O(n^3). Much of the O(n^6) * m work is unnecessary: trying all O(n^2) broad cycles, for instance, can be avoided by good choices, and likewise the O(n^2) segments. How do we prove the algorithm? Our approach: check every undirected graph to see whether it fulfils our algorithm and all the stated properties.
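For concreteness, broad cycles, break-point counting, and a simplified (non-wrapping) cut and insert can be sketched as follows — my own illustrative code under assumed data representations, not the author's implementation:

```python
def break_points(cycle, adj):
    """Number of consecutive non-adjacent pairs in a broad cycle.
    cycle: list of vertices; adj: dict vertex -> set of neighbours."""
    n = len(cycle)
    return sum(1 for i in range(n)
               if cycle[(i + 1) % n] not in adj[cycle[i]])

def cut_and_insert(cycle, i, r, j, reverse=False):
    """Cut the segment cycle[i:i+r] and insert it after position j
    of the remaining cycle, optionally reversed (the 'rotation').
    Simplification: assumes the segment does not wrap past the end."""
    seg = cycle[i:i + r]
    rest = cycle[:i] + cycle[i + r:]
    if reverse:
        seg = seg[::-1]
    j %= len(rest)
    return rest[:j + 1] + seg + rest[j + 1:]
```

On the 4-cycle 0-1-2-3, the broad cycle [0, 2, 1, 3] has 2 break points, and one cut and insert (moving vertex 1 after vertex 0) reduces it to the Hamilton cycle [0, 1, 2, 3] with 0 break points.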
But how can we check "every" graph? Observe that in each cut and insert only a few vertices take part; the others do not. We can therefore delete most vertices and study only the remaining part, with a few vertices left. First we explain how to delete a vertex. 1) The deletion rule: before deleting a vertex, all broad cycles have the least number of break points; after deleting, this property must still hold, without rearranging the broad cycles. 2) For …abc…, after deleting vertex b it becomes ac; even if vertex a is not adjacent to c, at this time and place we assume a temporary edge between a and c. 3) For …a*b*c…, after deleting vertex b it becomes a*c; even if a is adjacent to c, at this time and place we assume it is not. 4) For …a*bc…, after deleting vertex b it becomes a*c; even if a is adjacent to c, at this time and place we assume it is not. 5) If contradictions arise, resolve them with this rule: certain edges cannot occur in the same broad cycle at the same time. For example, for …abc… and …dbe…, with a not adjacent to c and d not adjacent to e, deleting vertex b yields new edges ac and de, but these two edges cannot occur in the same broad cycle at the same time. Now we fix the number of vertices for our test graphs. When we do a cut and insert with rotation and extension on a segment, the segment needs at least 4 vertices; the rest of the broad cycle needs at least one new general point at which to insert the segment, so it must contain at least 4 vertices — 8 together. We must also keep the broad cycle's own general point (2 vertices) and the current added edge (2 vertices). So we must test all graphs with at least 12 vertices. We should then add a vertex to each 12-vertex graph to see whether the added vertex can destroy the algorithm.
So we must also test all graphs with 13 vertices. Note: each test graph is not merely a simple graph but the remaining part of a large graph after many vertices have been deleted. In such a graph, some edges cannot occur in the same broad cycle at the same time; we call this "a limitation". For the test graphs, we must consider all possible combinations of limitations. How are the limitations decided? At each big step, for the old broad cycles, we may impose any non-contradictory limitations, but no limitation on the current added edge, i.e., that edge is an absolute real edge. Also, when we do the cut and insert, we need not add new limitations, because the added vertex can produce new ones. Note: adding a vertex is the reverse operation of deleting a vertex. If, after we add a vertex to the graph, a formerly useful cut and insert is no longer useful, we say the added vertex destroys that useful cut and insert. There are mainly six kinds of destroys: Destroy 1: suppose the useful cut and insert is: we insert a…b into …fg…cd…, with a adjacent to c and b adjacent to d. When we add a vertex x, giving …fg…cxd…, if x is also adjacent to a or b, we can still insert a…b between c and x or between x and d; in that case x does not destroy the useful cut and insert. If x is adjacent to neither a nor b, x destroys it. Destroy 2: suppose we insert a…b into …fg…cd…, with a adjacent to c and b adjacent to d, but when we add a vertex x, giving …fxg…cd…, vertex a is no longer adjacent to c, only axc; then x destroys the useful cut and insert. Destroy 3: suppose we insert a…b into …fg…cd…, with a adjacent to c and b adjacent to d or not; after inserting we get …fg…ca…b*d…, the new broad cycle of the general point b*d.
When we add a vertex x, giving …fg…cxd…, we say x destroys the useful cut and insert. Even if we can still insert a…b after c and x to obtain a new broad cycle for the general point b*x, we still say x destroys the original useful cut and insert — but it produces a new one. Destroy 4: usually one vertex destroys one useful cut and insert, but in the following case each of two vertices destroys one: we insert a…b into …cdef…, with a adjacent to d, b adjacent to f, and e adjacent to c; then each of the vertices d and e destroys the useful cut and insert. Destroy 5: suppose we insert a…b into …f*g…cd…, with a adjacent to c and b adjacent to d; when we add a vertex x, giving …f*x*g…cd…, we can still insert a…b into cd, but the result no longer has the least number of break points, so x destroys the useful cut and insert. Destroy 6: if vertices a and b (or more) together destroy a useful cut and insert, but deleting any one of them alone cannot recover it and only deleting both can, then we do not say a or b destroys it — though after deleting a, b becomes destroying. In fact Destroy 6 is not a destroy; we state it only to explain this case. Note: there are only these six kinds of destroys, no others, because the test graph is the remaining part of some large graph after deletions. Note also: a destroy requires that a real new broad cycle exists for it; if no such new broad cycle exists, it is not a destroy.
For a useful cut and insert, we call the broad cycle that produces a new broad cycle via it a parent broad cycle, its general point a parent general point, the new broad cycle a child broad cycle, and that cycle's general point a child general point. A vertex that destroys some useful cut and insert is a destroying vertex; a vertex that destroys none is an un-destroying vertex. Our main proof idea: we check that every graph with 12, 13, or 14 vertices has, at each of its little steps, at least one useful cut and insert; we also check that for every such graph, at each little step, the un-destroying vertices outnumber the destroying vertices by at least 2. We then prove, by induction on the number of vertices, that every graph with more than 14 vertices also has this property. So for a 15-vertex graph, deleting an un-destroying vertex yields a 14-vertex graph; since every 14-vertex graph has at least one useful cut and insert, so does the 15-vertex graph, and so on. Now we explain a very important concept, the "main child general point", which has the following checkable properties: 1) For each 12-vertex graph at each of its little steps, the least break-point count of some broad cycle of one of its main child general points must decrease by 1 after adding the current edge. 2) Each 12-vertex graph at each of its little steps has at least one main child general point and at least one useful cut and insert for it, unless it has finished all the little steps of the current big step. 3) Each 13-vertex graph at each of its little steps has at least one main child general point and at least one useful cut and insert for it.
When we delete any one vertex other than the two vertices of the main child general point, the remaining 12-vertex graph still has the same main child general point and at least one useful cut and insert for it. 4) For each 13-vertex graph at each of its little steps, let a, b, c be three distinct vertices not among the two vertices of the main child general point (call it g), and suppose we want to add one vertex d. If, whether we delete a and add d or delete b and add d, g is in both cases the 13-vertex graph's main child general point, then for these 14 vertices, deleting any one vertex other than the two of the main child general point leaves g as the remaining 13-vertex graph's main child general point; we call g the 14-vertex graph's main child general point. If in both cases the 13-vertex graph's main child general point is (d,c), then for these 14 vertices, deleting any one vertex other than the two of the main child general point leaves (d,c) as the remaining 13-vertex graph's main child general point. If deleting a and adding d gives g or (d,c) as the main child general point, but deleting b and adding d does not, then for these 14 vertices, deleting any one vertex other than the two of the main child general point (d,a) leaves (d,a) as the remaining 13-vertex graph's main child general point; we then call (d,a) the 14-vertex graph's main child general point. 5) From 4) we obtain the corollary: every graph with more than 13 vertices also has property 4). 6) For each graph with 12, 13, or 14 vertices, at each of its little steps, for one main child general point, the un-destroying vertices outnumber the destroying vertices by at least 2. We also check that every 14-vertex graph has at least one useful cut and insert for the main child general point at every little step.
Note: here "destroying" refers only to useful cut and inserts that produce new broad cycles whose general point is the main child general point, and likewise "un-destroying". Now we prove that every graph with more than 14 vertices also has property 6), by induction on the number of vertices. Proof: suppose every k-vertex graph has property 6) (k ≥ 14). For a (k+1)-vertex graph, let (a,b) be one main child general point at one of its little steps, let d be the number of its destroying vertices and u the number of its un-destroying vertices, both with respect to this main child general point. If u ≥ d + 2, we are done. If not, first suppose d = u. The worst case is: for each un-destroying vertex, all of its possible parent general points (i.e., over every choice of added vertex other than the two of the main child general point — all possible parent broad cycles whose general points contain this un-destroying vertex) are destroyed by 2 destroying vertices. So if we delete one un-destroying vertex, the destroying count becomes d-2 and the un-destroying count u+1, i.e., we move 2 vertices from destroying to un-destroying. But we can delete those two vertices as well; then the destroying count is d-2 (possibly more, because by Destroy 6 deleting one vertex may turn another vertex destroying, but this does not affect the proof) and the un-destroying count is u-1, and u-1-(d-2) = 1: for the (k-2)-vertex graph, the un-destroying vertices outnumber the destroying ones by only 1. A contradiction. Next suppose u = d+1. The worst case is: for at least one un-destroying vertex, all of its possible parent general points (over every added vertex other than the two of the main child general point) are destroyed by only 1 destroying vertex. So if we delete this vertex, the destroying count becomes d-1 and the un-destroying count u, i.e., we move 1 vertex from destroying to un-destroying.
But we can delete that vertex as well; then the destroying count is d-1 (possibly more) and the un-destroying count is u-1, and u-1-(d-1) = 1: for the (k-1)-vertex graph, the un-destroying vertices outnumber the destroying ones by only 1. Again a contradiction. The other cases yield contradictions in the same way. So we have proved that every graph with more than 14 vertices also has property 6), i.e., it has many vertices that destroy no useful cut and insert for the main child general point. Since every 14-vertex graph has at least one useful cut and insert for the main child general point at each of its little steps, so does every graph with more than 14 vertices.
5540 reads | 45 comments
A Plain-Language, Detailed Explanation of P and NP
Popularity 3 | dulizhi95 2012-1-25 11:16
My previous post touched on NP, and a commenter felt it was wrong of me not to explain the concept. So this time I explain P and NP in detail, so that science and engineering readers outside computer science can follow — and computer-science readers all the more thoroughly. A problem to be computed or solved usually has a size, denoted n. For example, if we analyze a binary number, the number of bits is its size. Or: to find the largest of n numbers, n is the problem's size. How do we find it? We need n-1 comparisons to get the answer, and this n-1 is the time spent, i.e., the time complexity. Again: to sort n numbers in decreasing order, n is the size; if we proceed by finding the maximum of the n numbers, then the maximum of the remaining n-1, and so on, the number of comparisons is n(n-1)/2. The procedure we used is called an algorithm, and n(n-1)/2 is its time complexity. For time complexity, when n is large enough we care only about the highest-order term and ignore the rest; the constant coefficient also does not matter. So from n(n-1)/2 we keep only the n-squared term and write O(n^2) — the standard notation for this algorithm's time complexity on this problem. Anything of the form a*n^k + b*n^(k-1) + c*n^(k-2) + … can be written O(n^k), where n^k means n to the k-th power and * is multiplication; such complexities are called polynomial-time. If the time complexity has the form k^n with k a constant greater than 1, or n!, or larger still, it is called exponential. Clearly, for large enough n, exponential time vastly exceeds polynomial time. Problems solvable by a polynomial-time algorithm are called polynomial problems — that is P. Problems that provably cannot be solved in polynomial time are called exponential problems. There is a class of problems with this property: given a claimed solution, verifying whether it is correct takes polynomial time, regardless of whether finding the solution does — there may be a polynomial algorithm, there may not, or we may not know. These are the NP problems. The subtlety of NP is that it sidesteps the question of how long solving takes and asks only how long verifying takes — which is what set the stage for the P vs NP millennium problem. Clearly P is contained in NP: if you can solve in polynomial time, you can certainly verify in polynomial time (if nothing else, just solve it again!). But whether NP is contained in P, nobody can determine. Also, problems already known to be exponential are certainly not NP. Understanding this last point is a bit harder; see my explanation below. So far the concept of NP is still abstract, and the listener has probably not yet mastered it thoroughly. If I were lecturing, for a concept this abstract and hard to grasp I would tell students to master it as follows (and I hope I am not one of those people who merely memorize definitions without even understanding them, yet brazenly boast of their "excellent lecturing"; my logic is simple: I once mastered this efficiently and thoroughly by this method, and students who follow it can reach the same efficiency and thoroughness): 1. Find three examples — one in P, one in NP but not known to be in P, and one clearly exponential problem such as full permutation or the Tower of Hanoi — and compare them repeatedly. 2. List as many characteristics of NP problems as possible, overlap allowed, and understand each against the examples. NP's two most salient features are (a) decomposition into steps, with parallelism at each step, and (b) that every NP problem can be recast as a yes/no question — and, most fundamentally, polynomial-time verification. Of course I would also work out the characteristics in detail and carefully prepare the corresponding examples, but there is no need to spend the ink here. Many people, not having grasped the substance of P and NP, have merely memorized the definitions involving deterministic and nondeterministic Turing machines, languages, and space — without understanding them — and, shuffling the definitions around pointlessly, imagine that the literal scope of these definitions alone yields the conclusion P ≠ NP. Do these people really believe rice cakes can be made without rice?
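The asymmetry between solving and verifying can be made concrete with SAT (a sketch of my own, using a DIMACS-style clause encoding): verification is one polynomial-time pass over the clauses, while the obvious way to find a solution tries up to 2^n assignments, each checked by that same fast verifier.

```python
from itertools import product

def verify_assignment(clauses, assignment):
    """Polynomial-time verification of a CNF assignment.
    clauses: list of lists of nonzero ints (DIMACS-style:
    v means variable v is true, -v means variable v is false)."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def solve_by_search(clauses, num_vars):
    """Brute-force solving: worst case 2^n assignments, each
    verified in polynomial time -- the heart of the P vs NP gap."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if verify_assignment(clauses, assignment):
            return assignment
    return None
```

Whether the exhaustive search in `solve_by_search` can always be replaced by a polynomial-time procedure is exactly the P vs NP question.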
23337 reads | 3 comments
Proving Unsatisfiability
Popularity 3 | haijunzhouitp 2011-5-5 17:16
The 3-SAT problem is simple — to state. It can be very hard, especially if you want to prove that a formula cannot be satisfied. Easier said than done. The hope is to use random search to find, for a particular 3-SAT formula, some evidence that the formula cannot be satisfied. If this works well on random 3-SAT formulas with constant constraint density, that would be very interesting. — Helsinki, May 5, 2011. Finally posted the first paper to arXiv: "Witness of unsatisfiability for a random 3-satisfiability formula", Lu-Lu Wu, Hai-Jun Zhou, Mikko Alava, Erik Aurell and Pekka Orponen (arXiv:1303.2413, 2013) http://arxiv.org/abs/1303.2413 — Hong Kong, March 11, 2013
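One classical form of a "witness of unsatisfiability" — not the method of the arXiv paper above, just a standard illustration of the idea — is a resolution refutation: a sequence of resolution steps ending in the empty clause, which anyone can check in time linear in the proof's length even though finding such a proof may be very hard.

```python
def resolve(c1, c2, var):
    """Resolve two clauses on variable var (DIMACS-style literals);
    returns the resolvent, or None if var does not appear with
    opposite signs in the two clauses."""
    if (var in c1 and -var in c2) or (-var in c1 and var in c2):
        return sorted(set(c1 + c2) - {var, -var})
    return None

def check_refutation(clauses, steps):
    """Verify a resolution refutation: each step (i, j, var) resolves
    two earlier clauses (by index); the proof is a valid witness of
    unsatisfiability iff the last derived clause is empty."""
    derived = [sorted(set(c)) for c in clauses]
    for i, j, var in steps:
        r = resolve(derived[i], derived[j], var)
        if r is None:
            return False
        derived.append(r)
    return derived[-1] == []
```

For example, from [1,2], [-1,2], [-2,3], [-3] one derives [2], then [3], then the empty clause.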
Category: Reflections | 3740 reads | 4 comments
Big News in Computer Science (101031)
Popularity 1 | ymin 2010-10-31 16:31
By Min Yinghua. It is absolutely big news in computer science: P≠NP has been proved — or so it is claimed. Vinay Deolalikar, a researcher at HP Labs (born 1971; bachelor's and master's degrees from IIT Bombay; PhD from the University of Southern California (USC) in 1999), claims to have proved P≠NP and has posted a draft of the paper online. On August 6 he privately sent the roughly 100-page draft to several leading researchers in the relevant fields for review. Professor Richard Lipton of Georgia Tech convened a group of top international experts to examine Deolalikar's work. The group includes the legendary Chinese-American mathematician Terence Tao, who was appointed full professor at UCLA at age 24 and in 2006, at 31, won the Fields Medal, mathematics' highest honor. On August 15 Lipton published a blog post saying significant progress had been made and the problem could be split into several subproblems. On August 16 the New York Times also ran an article saying the main obstacle had been overcome. P vs NP is a question of computational complexity. Problems whose solutions can be found in polynomial time are called P problems; problems for which a solution can only be verified in polynomial time, with no assurance that one can be found in polynomial time, are called NP problems. For a long time, computer science has been unable to prove whether these two classes coincide or are distinct; P vs NP is one of the Clay Mathematics Institute's seven Millennium Prize Problems. Of course, from an engineering standpoint, a P problem is not necessarily easy, nor an NP problem necessarily hopelessly hard. For example, computing (10n) to the 1000th power is a P problem, but for large n no computer, however big, can carry it out. Conversely, Boolean satisfiability (SAT) (see http://www.sciencenet.cn/m/user_content.aspx?id=255116 ) is a canonical NP problem, yet today's industrial SAT solvers can handle Boolean formulas with millions of variables. Still, the theoretical significance of the question is beyond doubt. People in our country have also tackled this problem, and some have produced rather long proofs. But when these are sent to certain foreign authorities, they essentially do not read them. One reason is that finding the error in a proof is exhausting work nobody wants to do; another is that, judging from the author's background and training, they do not look like someone who could prove so major a result. Indeed, a problem this deep cannot be solved without a very strong mathematical foundation; you would not even know where to start. It is like those who do not even understand Turing machine theory yet want to overturn it — as someone mocked: truly, the ignorant are fearless. A solid research foundation is absolutely necessary for attacking major scientific problems.
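As a toy illustration of why SAT solvers can do well in practice despite NP-hardness (my own minimal sketch; industrial CDCL solvers add clause learning, watched literals, and restarts on top of this), here is the DPLL skeleton — unit propagation plus splitting:

```python
def dpll(clauses):
    """Minimal DPLL satisfiability test on DIMACS-style clauses
    (lists of nonzero ints; -v is the negation of v)."""
    clauses = [list(c) for c in clauses]
    # Unit propagation: a one-literal clause forces its literal true.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        new = []
        for c in clauses:
            if lit in c:
                continue          # clause satisfied, drop it
            c2 = [x for x in c if x != -lit]
            if not c2:
                return False      # conflict: derived the empty clause
            new.append(c2)
        clauses = new
    if not clauses:
        return True               # all clauses satisfied
    # Split on the first literal of the first remaining clause.
    lit = clauses[0][0]
    return dpll(clauses + [[lit]]) or dpll(clauses + [[-lit]])
```

On structured industrial instances, propagation does most of the work, which is why solvers scale far beyond what the worst-case exponential bound suggests.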
Category: Computers | 4527 reads | 11 comments
