Merge pull request #8 from nicholasfagan/master

Fixed spelling mistakes and made clarifications.
Christian Schabesberger 2018-12-14 09:39:43 +01:00 committed by GitHub
commit b76024159c
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
7 changed files with 33 additions and 33 deletions


@@ -4,11 +4,11 @@ NewPipe Tutorial
[![travis_build_state](https://api.travis-ci.org/TeamNewPipe/documentation.svg?branch=master)](https://travis-ci.org/TeamNewPipe/documentation)
This is the [tutorial](https://teamnewpipe.github.io/documentation/) for the [NewPipeExtractor](https://github.com/TeamNewPipe/NewPipeExtractor).
-It's thought for thous who want to write their own service, or use NewPipeExtractor in their own projects.
+It is for those who want to write their own service, or use NewPipeExtractor in their own projects.
This tutorial and the documentation are in an early state. So [feedback](https://github.com/TeamNewPipe/documentation/issues) is always welcome :D
-The tutorial is crated using [`mkdocs`](http://www.mkdocs.org/). You can test and host it your self by running `mkdocs serve` in the root
+The tutorial is created using [`mkdocs`](http://www.mkdocs.org/). You can test and host it yourself by running `mkdocs serve` in the root
directory of this project. If you want to deploy your changes and you are one of the maintainers you can run `mkdocs gh-deploy && git push`.
## License


@@ -5,9 +5,9 @@ service with which NewPipe will gain support for a dedicated streaming service l
The whole documentation consists of this page, which explains the general concept of the NewPipeExtractor, as well as our
[Jdoc](https://teamnewpipe.github.io/NewPipeExtractor/javadoc/) setup.
-__IMPORTANT!!!__ this is likely to be the worst documentation you have ever red, so do not hesitate to
+__IMPORTANT!!!__ this is likely to be the worst documentation you have ever read, so do not hesitate to
[report](https://github.com/teamnewpipe/documentation/issues) if
-you find any (spelling)errors, incomplete parts or you simply don't understand something. We are an open community
+you find any spelling errors, incomplete parts or you simply don't understand something. We are an open community
and are open for everyone to help :)
## Set up your dev environment
@@ -19,7 +19,7 @@ First and foremost you need to meet certain conditions in order to write your ow
- Basic understanding of __[git](https://try.github.io)__
- Good __[Java](https://whatpixel.com/best-java-books/)__ knowledge
- Good understanding of __[web technology](https://www.w3schools.com/)__
-- Basic understanding about __[unit testing](https://www.vogella.com/tutorials/JUnit/article.html)__ and __[JUnit](https://junit.org/)__
+- Basic understanding of __[unit testing](https://www.vogella.com/tutorials/JUnit/article.html)__ and __[JUnit](https://junit.org/)__
- Flawless understanding of how to [contribute](https://github.com/TeamNewPipe/NewPipe/blob/dev/.github/CONTRIBUTING.md#code-contribution) to the __NewPipe project__
### What you need to have
@@ -61,11 +61,11 @@ After creating your own service you will need to submit it to our [NewPipeExtract
- Basically anything except [NOT allowed content](#not-allowed-content).
- Any kind of porn/NSFW that is allowed according to the [US Porn act](https://www.justice.gov/archive/opa/pr/2003/April/03_ag_266.htm).
-- Advertisement (may be handled specially tho)
+- Advertisement (may be handled specially though)
## NOT allowed Content
-- NSFL
+- NSFL (Not Safe For Life)
- Porn that is not allowed according to [US Porn act](https://www.justice.gov/archive/opa/pr/2003/April/03_ag_266.htm).
- Any form of violence
- Child pornography


@@ -5,9 +5,9 @@
Before we can start coding our own service we need to understand the basic concept of the extractor. There is a pattern
you will find all over the code. It is called the __extractor/collector__ pattern. The idea behind it is that
the [extractor](https://teamnewpipe.github.io/NewPipeExtractor/javadoc/org/schabi/newpipe/extractor/Extractor.html)
-would produce single pieces of data, and the collector would take it and form usable data for the front end out of it.
-The collector also controls the parsing process, and takes care about error handling. So if the extractor fails at any
-point the collector will decide whether it should continue parsing or not. This requires the extractor to be made out of
+would produce single pieces of data, and the collector would collect it to form usable data for the front end.
+The collector also controls the parsing process, and takes care of error handling. So if the extractor fails at any
+point, the collector will decide whether or not it should continue parsing. This requires the extractor to be made out of
many small methods. One method for every data field the collector wants to have. The collectors are provided by NewPipe.
You need to take care of the extractors.
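The pattern described above can be sketched like this (a toy illustration with invented class and method names, not the real NewPipeExtractor API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified illustration of the extractor/collector pattern.
// Class and method names are invented; the real NewPipeExtractor API differs.
class ItemExtractor {
    // One small method per data field, each of which may fail on its own.
    String extractTitle() throws Exception { return "Some video"; }
    String extractUploader() throws Exception { throw new Exception("uploader field missing"); }
}

class ItemCollector {
    final List<String> errors = new ArrayList<>();
    final List<String[]> items = new ArrayList<>();

    // The collector drives the parsing and decides how to handle failures:
    // here a missing field is recorded as an error, but the item is still committed.
    void commit(ItemExtractor extractor) {
        String title = "";
        String uploader = "";
        try { title = extractor.extractTitle(); } catch (Exception e) { errors.add(e.getMessage()); }
        try { uploader = extractor.extractUploader(); } catch (Exception e) { errors.add(e.getMessage()); }
        items.add(new String[]{title, uploader});
    }
}
```

Because the collector sees each field separately, one broken field does not have to kill the whole item.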
@@ -92,7 +92,7 @@ private MyInfoItemCollector collectInfoItemsFromElement(Element e) {
## InfoItems encapsulated in pages
-When a streaming site shows a list of items it usually offers some additional information about that list, like it's title a thumbnail
+When a streaming site shows a list of items it usually offers some additional information about that list, like its title, a thumbnail,
or its creator. Such info can be called __list header__.
When a website shows a long list of items it usually does not load the whole list, but only a part of it. In order to get more items you may have to click on a next page button, or scroll down.
@@ -116,7 +116,7 @@ such as:
The reason why the first page is handled specially is that many websites, such as YouTube, will load the first page of
-items like a regular webpage, but all the others as AJAX request.
+items like a regular webpage, but all the others as an AJAX request.
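This two-phase loading can be sketched as follows (invented URLs and method names; the real list extractor API differs):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of paged list extraction. The first page comes from the
// regular HTML document, later pages from a separate (often AJAX) endpoint.
class PagedListExtractor {
    // A real service would parse this token out of the page it just loaded.
    private String nextPageUrl = "https://example.service/trending?page=2";

    // First page: parsed from the initial webpage.
    List<String> getInitialPage() {
        return Arrays.asList("item 1", "item 2");
    }

    boolean hasNextPage() {
        return nextPageUrl != null;
    }

    String getNextPageUrl() {
        return nextPageUrl;
    }

    // Later pages: loaded from the next-page URL.
    List<String> getPage(String pageUrl) {
        nextPageUrl = null; // this toy example has only two pages
        return Arrays.asList("item 3", "item 4");
    }
}
```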


@@ -12,12 +12,12 @@ one unique ID that represents it, like this example:
- https://m.youtube.com/watch?v=oHg5SJYRHA0
### Important notes about LinkHandler:
-- A simple `LinkHandler` will contain the default URL, the ID and the original url.
-- `LinkHandler` are ReadOnly
-- LinkHandler are also used to determine which part of the extractor can handle a certain link.
+- A simple `LinkHandler` will contain the default URL, the ID and the original URL.
+- `LinkHandler`s are read-only.
+- `LinkHandler`s are also used to determine which part of the extractor can handle a certain link.
- In order to get one you must either call
[fromUrl()](https://teamnewpipe.github.io/NewPipeExtractor/javadoc/org/schabi/newpipe/extractor/linkhandler/LinkHandlerFactory.html#fromUrl-java.lang.String-) or [fromId()](https://teamnewpipe.github.io/NewPipeExtractor/javadoc/org/schabi/newpipe/extractor/linkhandler/LinkHandlerFactory.html#fromId-java.lang.String-) of the corresponding `LinkHandlerFactory`.
-- Every type of Type of Resource has its own LinkHandlerFactory. Eg. YoutubeStreamLinkHandler, YoutubeChannelLinkHandler, etc.
+- Every type of resource has its own `LinkHandlerFactory`, e.g. YoutubeStreamLinkHandler, YoutubeChannelLinkHandler, etc.
### Usage
@@ -65,15 +65,15 @@ ListLinkHandler are also created by overriding the [ListLinkHandlerFactory](http
In addition to the abstract methods this factory inherits from the LinkHandlerFactory, you can override
[getAvailableContentFilter()](https://teamnewpipe.github.io/NewPipeExtractor/javadoc/org/schabi/newpipe/extractor/linkhandler/ListLinkHandlerFactory.html#getAvailableContentFilter--)
and [getAvailableSortFilter()](https://teamnewpipe.github.io/NewPipeExtractor/javadoc/org/schabi/newpipe/extractor/linkhandler/ListLinkHandlerFactory.html#getAvailableSortFilter--).
-Through these you can tell the front end which kind of filter your service support.
+Through these you can tell the front end which kind of filter your service supports.
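For illustration, such filter declarations could look like this (the filter names are invented):

```java
// Hypothetical sketch: a list-link-handler factory advertising which content
// and sort filters the service understands, so the frontend can offer them.
class MyChannelLinkHandlerFactory {
    String[] getAvailableContentFilter() {
        return new String[]{"videos", "playlists"};
    }

    String[] getAvailableSortFilter() {
        return new String[]{"newest", "most_viewed"};
    }
}
```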
#### SearchQueryHandler
You cannot point to a search request with an ID like you point to a playlist or a channel, simply because one and the
-same search request might have a changing outcome deepening on the country or the time you send the request. This is
+same search request might have a different outcome depending on the country or the time you send the request. This is
why the idea of an "ID" is replaced by a "SearchString" in the [SearchQueryHandler](https://teamnewpipe.github.io/NewPipeExtractor/javadoc/org/schabi/newpipe/extractor/linkhandler/SearchQueryHandler.html).
-These work like regular ListLinkHandler, accept that you don't have to implement the methodes `onAcceptUrl()`
+These work like regular ListLinkHandler, except that you don't have to implement the methods `onAcceptUrl()`
and `getId()` when overriding [SearchQueryHandlerFactory](https://teamnewpipe.github.io/NewPipeExtractor/javadoc/org/schabi/newpipe/extractor/linkhandler/SearchQueryHandlerFactory.html).
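A minimal sketch of the idea (invented endpoint and parameter name):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: a search handler maps a search string (instead of an
// ID) to a request URL. There is no stable ID to parse back out of the URL,
// which is why onAcceptUrl() and getId() are not needed here.
class MySearchQueryHandlerFactory {
    String getUrl(String searchString) {
        return "https://example.service/search?q="
                + URLEncoder.encode(searchString, StandardCharsets.UTF_8);
    }
}
```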


@@ -10,7 +10,7 @@ before doing this.
Your service does not have to implement everything. Some parts are optional.
This is because not all services support every feature other services support. For example, it might be that a certain
service does not support channels. If so, you can leave out the implementation of channels, and make the corresponding
-factory methode of the your __StreamingService__ implementation return __null__. The forntend will handle the lack of
+factory method of your __StreamingService__ implementation return __null__. The frontend will handle the lack of
having channels then.
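As a sketch, such an unsupported optional part could look like this (invented, highly simplified class; the real factory methods return concrete extractor types):

```java
// Hypothetical sketch of a service that supports streams but not channels:
// the optional factory method returns null and the frontend hides the feature.
class MyStreamingService {
    Object getStreamExtractor() {  // required: every service has streams
        return new Object();
    }

    Object getChannelExtractor() { // optional part, not supported here
        return null;
    }
}
```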
However if you start to implement one of the optional parts of the list below, you have to implement all parts/classes
@@ -34,9 +34,9 @@ which will give you a little help with that. __Use Regex with care!!!__ Avoid it
and rather ask us to introduce a new library than use regex too often.
- HTML/XML Parsing: [jsoup](https://jsoup.org/apidocs/overview-summary.html)
-- JSON Parsiong: [nanojson](https://github.com/mmastrac/nanojson#parser-example)
+- JSON Parsing: [nanojson](https://github.com/mmastrac/nanojson#parser-example)
- JavaScript Parsing/Execution: [Mozilla Rhino](https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino/Documentation)
-- Link dectection in strings: [AutoLink](https://github.com/robinst/autolink-java)
+- Link detection in strings: [AutoLink](https://github.com/robinst/autolink-java)
If you need to introduce new libraries please tell us before you do it.
@@ -69,8 +69,8 @@ So when adding your service just give it the ID of the previously last service i
### Stream
-Streams are considered single entities of video or audio, they come along with metainformation like a title, a description,
-next/related videos, thumbnail and commends. For getting the url to the actual stream data as well as this metainformation
+Streams are considered single entities of video or audio. They come along with metainformation like a title, a description,
+next/related videos, a thumbnail and comments. For getting the URL to the actual stream data as well as this metainformation
StreamExtractor is used. The LinkHandlerFactory will represent a link to such a stream. StreamInfoItemExtractor will
extract one item in a list of items representing such Streams, like a search result or a playlist.
Since every streaming service (obviously) provides streams, implementing this is required. Otherwise your service was
@@ -105,7 +105,7 @@ __Parts required to be implemented:__
- [ListLinkHandlerFactory](https://teamnewpipe.github.io/NewPipeExtractor/javadoc/org/schabi/newpipe/extractor/linkhandler/ListLinkHandlerFactory.html)
### Channel
-A Channel is mostly a [Playlist](#playlist), the only diferens is that it does not represent a simple list of streams, but a
+A Channel is mostly a [Playlist](#playlist); the only difference is that it does not represent a simple list of streams, but a
user, a channel, or any entity that could be represented as a user. This is why the metadata supported by the channel extractor
differs from that of a playlist.
@@ -123,7 +123,7 @@ Your service would look pretty empty if you select it and no video is being displ
since users of NewPipe can decide in the settings whether they want to see the kiosk page or not.
#### Multiple Kiosks
-Most services will implement more than one Kiosk, so a service might have a "Top 20" for different categories like "Country Music", "Techno" ets.
+Most services will implement more than one Kiosk, so a service might have a "Top 20" for different categories like "Country Music", "Techno", etc.
This is why the extractor will let you implement multiple __KioskExtractors__. Since different kiosk pages might also differ
in their HTML structure, every page you want to support has to be implemented as its own __KioskExtractor__.
However, if the pages are similar you can use the same implementation, but set the page type when you instantiate your __KioskExtractor__
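This reuse of one implementation for several kiosk pages can be sketched like this (invented kiosk IDs and URLs):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one KioskExtractor implementation reused for several
// similar kiosk pages by passing the page type at construction time.
class MyKioskExtractor {
    // Assumed kiosk-id -> URL table for an imaginary service.
    private static final Map<String, String> PAGES = new HashMap<>();
    static {
        PAGES.put("Trending", "https://example.service/trending");
        PAGES.put("Top 50", "https://example.service/top50");
    }

    private final String kioskId;

    MyKioskExtractor(String kioskId) {
        this.kioskId = kioskId;
    }

    String getUrl() {
        return PAGES.get(kioskId);
    }

    String getId() {
        return kioskId;
    }
}
```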


@@ -23,11 +23,11 @@ sometimes have to adjust the udev rules in order to
### Run your changes on the Extractor
In order to use the extractor in our app we use [jitpack](https://jitpack.io). This is a build service that can build
-marven *.jar packages for android and java based on a github or gitlab repositories.
+maven *.jar packages for android and java based on a github or gitlab repository.
To build the extractor through jitpack, you need to push your changes to the online repository of
your copy that you host either on [github](https://github.com) or [gitlab](https://gitlab.com). It's important to host
-it on one of both. Now copy your repository url in Http format, go to [jitpack](https://jitpack.io/), and past it there
+it on one of the two. Now copy your repository URL in HTTP format, go to [jitpack](https://jitpack.io/), and paste it there.
From here you can grab the latest commit via the `GET IT` button.
I recommend not using SNAPSHOT, since I am not sure when the snapshot is built. An "implementation" string will be generated
for you. Copy this string and replace the `implementation 'com.github.TeamNewPipe:NewPipeExtractor:<commit>'` line in
@@ -43,7 +43,7 @@ with the new extractor.
![image_sync_ok](img/sync_ok.png)
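For reference, the generated line belongs in the app's `build.gradle`; this is only a sketch, and `<commit>` stands for the hash jitpack generates for you:

```groovy
// Sketch of the relevant build.gradle pieces; the jitpack repository must
// be declared so Gradle can resolve the artifact.
repositories {
    maven { url 'https://jitpack.io' }
}
dependencies {
    implementation 'com.github.TeamNewPipe:NewPipeExtractor:<commit>'
}
```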
-### Trouble shoot
+### Troubleshooting
If something went wrong on jitpack's side, you can check their build log by selecting the commit you tried to build and
clicking on the little paper symbol next to the `GET IT` button. If it is red, the build failed.


@@ -3,7 +3,7 @@
<img width=150 src="https://raw.githubusercontent.com/TeamNewPipe/NewPipe/dev/assets/new_pipe_icon_5.png"/>
-This side is/should be a beginner friendly tutorial and documentation for people who want to use, or write services for the [NewPipe Extractor](https://github.com/TeamNewPipe/NewPipeExtractor).
+This site is/should be a beginner friendly tutorial and documentation for people who want to use, or write services for the [NewPipe Extractor](https://github.com/TeamNewPipe/NewPipeExtractor).
It is an addition to our auto generated [jdoc documentation](https://teamnewpipe.github.io/NewPipeExtractor/javadoc/).
Please be aware that it is also in an early state, so help and [feedback](https://github.com/TeamNewPipe/documentation/issues) are always welcome :D
@@ -11,7 +11,7 @@ Please be aware that it is also in an early state, so help and [feedback](https:
## Introduction
-The NewPipeExtractor is a Java framework for scraping video platform websites in a way that they can be accedes like a normal API. The extractor is the core of the popular YouTube and streaming App [NewPipe](https://newpipe.schabi.org) for android, however it's system independent and also available for other platforms.
+The NewPipeExtractor is a Java framework for scraping video platform websites in a way that they can be accessed like a normal API. The extractor is the core of the popular YouTube streaming app [NewPipe](https://newpipe.schabi.org) for Android; however, it is system independent and also available for other platforms.
-The beauty behind this framework is it takes care about the extraction process, error handling etc., so you can take care about what is important: Scraping the website.
-It focuses on making it possible for the creator of a scraper for a streaming service to create best outcome by least amount of written code.
+The beauty behind this framework is that it takes care of the extraction process, error handling etc., so you can take care of what is important: scraping the website.
+It focuses on making it possible for the creator of a scraper for a streaming service to create the best outcome with the least amount of written code.