ztqakita's Blog
    • Posts
    • Introduction
    • Algorithms
      • Complexity & Divide and Conquer
      • Dynamic Programming
      • Greedy & Back-track & Branch and Bound
    • Compiler
      • Lexical Analysis & Parsing
      • Semantic Analysis & Runtime Environment
      • Syntax-directed Translation
    • Computational Neuroscience
      • Ionic Currents
      • Neuroscience Basic Knowledge
    • Database System
      • Database System Lecture Note 1
      • Database System Lecture Note 2
      • Database System Lecture Note 3
      • Database System Lecture Note 4
    • DL
      • Convolutional Neural Network
      • Introduction to Deep Learning
      • Optimization for Deep Learning
      • Recursive Neural Network
      • Self-attention
      • Transformer
    • Life Learning
      • Architectures of neuronal circuits
      • how to model
      • Lecture James McClelland
      • Lecture Yao Xin
    • ML
      • Basic Concepts
      • Classification
      • Decision Tree
      • KNN
      • Perceptron
      • SOM
      • Support Vector Machines
    • Operating System
      • CPU Scheduling
      • File System
      • Introduction & OS Structure
      • Mass-Storage Structure & I/O System
      • Memory Management
      • Process & Threads
      • Process Synchronization
    • Paper Reading
      • Continuous-attractor Neural Network
      • Few-Shot Class-Incremental Learning
      • Integrated understanding system
      • Push-pull feedback
      • reservoir decision making network
      • Task representations in neural networks
    Complexity & Divide and Conquer

    Chapter 1 I. Algorithms and Algorithm Complexity 1. Definition: Input, Output, Definiteness, Finiteness, Effectiveness. Note: program vs. algorithm. Program: a program is written in some programming language, and does not have to be finite. Algorithm: an algorithm can be described by human languages, flow charts, some programming languages, or pseudo-code. 2. Evaluating an algorithm: correctness, robustness, complexity (time complexity and space complexity), readability, simplicity. II. Algorithm Complexity Analysis 1. Metrics: average-case time complexity, worst-case time complexity. 2.
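
    To make the divide-and-conquer idea concrete, here is a minimal merge sort sketch in Python (my own illustration, not taken from the post); its worst-case time complexity is $O(n \log n)$.

    ```python
    # Minimal divide-and-conquer sketch (merge sort).
    # Worst-case and average-case time complexity: O(n log n); extra space: O(n).
    def merge_sort(a):
        if len(a) <= 1:                  # base case: already sorted
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])       # divide: sort each half recursively
        right = merge_sort(a[mid:])
        merged, i, j = [], 0, 0          # conquer: merge the two sorted halves
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]
    ```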

    September 21, 2020 Read
    Introduction & OS Structure

    Chapter 1: Introduction 1.1 OS Definition A program that acts as an intermediary between a user of a computer and the computer hardware. Computer System Structure 1.1.1 User View Ease of use; users do not care about resource utilization! 1.1.2 System View Resource allocator and control program. 1.2 Computer-System Organization Bootstrap program: typically stored in ROM or EPROM, generally known as firmware. Initializes all aspects of the system, loads the operating system kernel, and starts execution.

    September 21, 2020 Read
    Database System Lecture Note 1

    Chapter 1 Introduction 1.1 DB, DBMS, DBS, DBAS DB: a collection of interrelated data stored in the system as files. DBMS: a set of programs to access the data in the DB. DBS: users + DBMS + DB. 1.2 View of Data 1.2.1 Levels of Data Abstraction View Level: how the data items in the DB are used by different users. Logical Level: e.

    September 15, 2020 Read
    Convolutional Neural Network

    I. CNN Structure Overview II. Convolution Note: 1. Every element in a filter is a network parameter to be learned. 2. The stride is the step size by which the filter moves from its previous position. 3. The size of the filter is chosen by the programmer. From the picture we can see that the largest values in the feature map indicate where the feature is detected. We then repeat the same process for every filter to generate more feature maps. When dealing with a color image, we use a filter cube instead of a matrix.
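
    As a rough illustration of the convolution step described above (my own sketch, not from the post; the input size, filter values, and stride are made-up assumptions), sliding a filter over an image with a given stride produces a feature map:

    ```python
    import numpy as np

    # Toy 2D convolution: slide a filter over the image with a given stride
    # and record the response at each position (the "feature map").
    def conv2d(image, filt, stride=1):
        H, W = image.shape
        k = filt.shape[0]
        out_h = (H - k) // stride + 1
        out_w = (W - k) // stride + 1
        fmap = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                patch = image[i*stride:i*stride+k, j*stride:j*stride+k]
                fmap[i, j] = np.sum(patch * filt)   # large value => feature detected here
        return fmap

    image = np.random.rand(6, 6)            # hypothetical single-channel input
    filt = np.array([[ 1, -1, -1],
                     [-1,  1, -1],
                     [-1, -1,  1]])          # in a CNN these entries are learned parameters
    print(conv2d(image, filt, stride=1).shape)   # (4, 4) feature map
    ```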

    September 14, 2020 Read
    Introduction to Deep Learning

    I. Basic Concepts 1. Fully Connected Feedforward Network 2. Matrix Operation Every layer has a weight matrix and a bias matrix; using matrix operations we can compute the output matrix $y$. Tip: using a GPU can speed up matrix operations. II. Why Deep Learning? 1. Modularization For a neural network, more neurons are not necessarily better: the examples show that adding layers (going deeper) improves accuracy more effectively, and this is the idea behind modularization. For example, when training the model below, you can use basic classifiers as modules. Each basic classifier can have sufficient training examples.
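
    A minimal sketch of the per-layer matrix operation mentioned above (layer sizes, weights, and the sigmoid activation are my own assumptions, not from the post): each fully connected layer computes $y = \sigma(Wx + b)$.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Forward pass through a small fully connected feedforward network:
    # each layer multiplies by its weight matrix W, adds its bias b,
    # and applies a nonlinearity.
    def forward(x, layers):
        a = x
        for W, b in layers:
            a = sigmoid(W @ a + b)
        return a

    rng = np.random.default_rng(0)
    layers = [(rng.normal(size=(4, 3)), rng.normal(size=4)),   # layer 1: 3 -> 4
              (rng.normal(size=(2, 4)), rng.normal(size=2))]   # layer 2: 4 -> 2
    x = rng.normal(size=3)
    print(forward(x, layers))   # output vector y of the network
    ```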

    August 10, 2020 Read
    Classification

    Classification I. Probabilistic Generative Models 1. Detailed Process The basic idea is estimating the probabilities from training data. Let's consider the two-class case: first of all, we need to figure out the prior class probabilities $P(C_k)$. It is easy to see that $P(C_k) = \frac{|C_k|}{|\text{training data}|}$. Then the task is to find $P(x|C_k)$. Each data point is represented as a vector of its attributes, and it exists as a point in a multidimensional space.
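
    For instance, a minimal sketch (with made-up labels, not from the post) of estimating the prior $P(C_k)$ as the fraction of training examples belonging to class $C_k$:

    ```python
    from collections import Counter

    # Estimate prior class probabilities P(C_k) = |C_k| / |training data|
    labels = ["cat", "dog", "cat", "cat", "dog"]   # hypothetical training labels
    counts = Counter(labels)
    priors = {c: n / len(labels) for c, n in counts.items()}
    print(priors)   # {'cat': 0.6, 'dog': 0.4}
    ```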

    July 28, 2020 Read
    Optimization for Deep Learning

    Some Notation: $\theta_t$: the model parameters at time step $t$; $\nabla L(\theta_t)$ or $g_t$: the gradient at $\theta_t$, used to compute $\theta_{t+1}$; $m_{t+1}$: the momentum accumulated from time step 0 to time step $t$, also used to compute $\theta_{t+1}$. I. Adaptive Learning Rates In gradient descent, we need to set the learning rate so that training converges properly and finds a local minimum, but it is sometimes difficult to choose a proper value for the learning rate.
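
    As one concrete example of an adaptive learning rate, here is a sketch of Adagrad (a standard scheme chosen for illustration; the post may cover different methods, and the toy loss below is my own assumption):

    ```python
    import numpy as np

    # Adagrad: each parameter gets its own effective learning rate,
    # eta / sqrt(sum of squared past gradients), so parameters with large
    # accumulated gradients are automatically slowed down.
    def adagrad(grad_fn, theta0, eta=0.5, eps=1e-8, steps=100):
        theta = np.array(theta0, dtype=float)
        accum = np.zeros_like(theta)            # running sum of g_t**2
        for _ in range(steps):
            g = grad_fn(theta)                  # g_t = dL/dtheta at theta_t
            accum += g ** 2
            theta -= eta * g / (np.sqrt(accum) + eps)
        return theta

    # Toy loss L(theta) = theta_1**2 + 10*theta_2**2, gradient = [2*theta_1, 20*theta_2]
    grad = lambda th: np.array([2 * th[0], 20 * th[1]])
    print(adagrad(grad, [3.0, 3.0]))            # approaches the minimum at [0, 0]
    ```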

    July 25, 2020 Read
    Introduction

    Greetings! This is an introduction post. This post tests the following: the hero image is in the same directory as the post; this post should be at the top of the sidebar; the post author should be the same as specified in the author.yaml file.

    June 8, 2020 Read

