Using an English-board example of the two-character chessboard (双字棋盘) to explain language points, knowledge points and originality points: demonstrate how the language chessboard automatically extracts the language points, then list them as a menu to identify the knowledge points, next analyse the resulting knowledge game record, and then search it further for originality points; software screenshots can be used to capture the original "chess soul", that is, its distinctive features, style and theme. - Zou Xiaohui (Geneculture)

Note: building on the very successful Chinese-board example of the two-character chessboard for explaining language points, knowledge points and originality points, this is the first demonstration lesson (at PKU), prepared specifically for young students and their parents, that extends the approach from the Chinese class (Chinese example) to the English class (English example).

e.g. for student by Xiaohui Zou Geneculture

Exploring our world: A famous explorer, Sir Ernest Shackleton, wanted to cross Antarctica. In 1914 he started the expedition, but ice closed round the ship. They took smaller boats and made a camp on the snow. They lost their ship when it went down under the ice and water. They couldn't move because the weather was terrible. They caught fish and drank water which they got from the snow. Later, they had to eat their dogs. Shackleton and some of his men climbed over mountains of ice, found help and went back for the other men. Everybody came home two years after the start of their expedition. They didn't cross Antarctica.

http://kben.koderx.com/article/99/board
Matrices, measures and their metrics can help simplify problems as well as their analysis and computation. The two-character chessboard (a twin Turing machine, or twin matrices) very neatly links the Turing test with Searle's Chinese Room, and with the Chinese Character Room that Zou Xiaohui indirectly formalized from it; this in turn leads to the knowledge game record, i.e. a menu for those who take in food for thought.

Appendix: Measuring the Progress of AI Research

This pilot project collects problems and metrics/datasets from the AI research literature, and tracks progress on them. You can use this Notebook to see how things are progressing in specific subfields or AI/ML as a whole, as a place to report new results you've obtained, as a place to look for problems that might benefit from having new datasets/metrics designed for them, or as a source to build on for data science projects. At EFF, we're ultimately most interested in how this data can influence our understanding of the likely implications of AI. To begin with, we're focused on gathering it.

Original authors: Peter Eckersley and Yomna Nasser at EFF. Contact: ai-metrics@eff.org. With contributions from: Gennie Gebhart and Owain Evans.

Inspired by and merging data from:
- Rodrigo Benenson's Who is the Best at X / Are we there yet? collating machine vision datasets progress
- Jack Clark and Miles Brundage's collection of AI progress measurements
- Sarah Constantin's Performance Trends in AI
- Katja Grace's Algorithmic Progress in Six Domains
- The Swedish Computer Chess Association's History of Computer Chess performance
- Qi Wu et al.'s Visual Question Answering: A survey of Methods and Datasets
- Eric Yuan's Comparison of Machine Reading Comprehension Datasets

Thanks to many others for valuable conversations, suggestions and corrections, including: Dario Amodei, Miles Brundage, Breandan Considine, Owen Cotton-Barrett, Eric Drexler, Ottavio Good, Katja Grace, Anselm Levskaya, Clare Lyle, Toby Ord, Michael Page, Anders Sandberg, Daisy Stanton, Gabriel Synnaeve, Stacey Svetlichnaya, Helen Toner, and Jason Weston. EFF's work on this project has been supported by the Open Philanthropy Project.

Table of Contents
- Taxonomy
- Source code for defining and importing data
- Problems, Metrics and Datasets
  - Game Playing: Abstract Strategy Games; Real-time Video Games
  - Vision and image modelling: Image recognition; Visual Question Answering; Video recognition; Generating images
  - Written Language: Reading Comprehension; Language Modelling; Conversation; Translation
  - Spoken Language: Speech recognition
  - Scientific and Technical Capabilities: Solving constrained, well-specified technical problems; Reading technical papers; Solving real-world technical problems; Generating computer programs from specifications
  - Learning to Learn Better: Generalization; Transfer Learning; One-shot Learning
  - Safety and Security: Adversarial Examples and Manipulation of Classifiers; Safety for Reinforcement Learning Agents; Automated Hacking Systems; Pedestrian Detection for self-driving vehicles; Transparency, Explainability & Interpretability; Fairness and Debiasing; Privacy Problems
- Taxonomy and recorded progress to date
- Breakdown of Problems and Metrics by Type/Category
- How to contribute to this project
- Notes on importing data
- Exporting / building on this data
- License

Taxonomy

It collates data with the following structure:

    problem
       \
        \-- metrics -- measures
        \
         \-- subproblems
                \
                 \-- metrics -- measures

Problems describe the ability to learn an important category of task. Metrics should ideally be formulated in the form "software is able to learn to do X given training data of type Y". In some cases X is the interesting part, but sometimes also Y. Measurements are the score that a specific instance of a specific algorithm was able to get on a Metric.
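The notebook implements this structure in Python (see "Source code for defining and importing data" in the table of contents); its actual class and function names may differ, so the following is only a minimal sketch, using hypothetical names, of how the problem / metric / measurement hierarchy could be modelled:

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class Measurement:
        # A score that a specific system obtained on a metric at a specific time.
        value: float   # the reported score, e.g. an error rate or an Elo rating
        source: str    # the codebase/team/project that produced the result
        when: date     # when the result was reported

    @dataclass
    class Metric:
        # One way of measuring progress on a problem, usually tied to a test dataset.
        name: str
        dataset: Optional[str] = None
        measurements: List[Measurement] = field(default_factory=list)

    @dataclass
    class Problem:
        # An important category of task that software can learn to perform.
        name: str
        attributes: List[str] = field(default_factory=list)   # e.g. ["vision", "agi"]
        metrics: List[Metric] = field(default_factory=list)
        subproblems: List["Problem"] = field(default_factory=list)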
problems are tagged with attributes: e.g. vision, abstract-games, language, world-modelling, safety

Some of these are about performance relative to humans (which is of course a very arbitrary standard, but one we're familiar with):
- agi -- most capable humans can do this, so AGIs can do this (note it's conceivable that an agent might pass the Turing test before all of these are won)
- super -- the very best humans can do this, or human organisations can do this
- verysuper -- neither humans nor human orgs can presently do this

problems can have subproblems, including simpler cases and preconditions for solving the problem in general

a metric is one way of measuring progress on a problem, commonly associated with a test dataset. There will often be several metrics for a given problem, but in some cases we'll start out with zero metrics and will need to start proposing some...

a measure is a score on a given metric, by a particular codebase/team/project, at a particular time

The present state of the actual taxonomy is at the bottom of this notebook.

https://www.eff.org/files/AI-progress-metrics.html
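As a usage illustration that reuses the sketch classes above (all names, datasets and numbers below are hypothetical, not figures from the notebook), tagging a problem with attributes, attaching a metric, and recording a measure might look like this:

    from datetime import date

    # A problem tagged with attribute strings such as "vision" and "agi".
    image_classification = Problem(
        name="Image classification",
        attributes=["vision", "agi"],
    )

    # A simpler case of the problem, attached as a subproblem.
    image_classification.subproblems.append(Problem(name="Handwritten digit recognition"))

    # A metric for the problem, associated with a (hypothetical) test dataset.
    top5_error = Metric(name="Top-5 classification error", dataset="example-test-set")
    image_classification.metrics.append(top5_error)

    # A measure: one score on that metric, by one codebase/team, at one point in time.
    top5_error.measurements.append(
        Measurement(value=0.05, source="example-team/example-model", when=date(2017, 1, 1))
    )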