REVIEWS. — WITH MY COMMENTS AND THOUGHTS. ... [ Word count: 5.400 ~ 22 PAGES | Revised: 2018.8.19 ]



 

Text reviews. Books. Papers. Fiction. Nonfiction. Comments. Thoughts. — Let's have some fun.

 

      Word count: 5.400 ~ 22 PAGES   |   Revised: 2018.8.19

 

— 〈  1  〉—

REVIEWS MARKS

 
The nonrepeating letters in the review marks are mostly arbitrary. They are chosen only so that many typos would have to be made in order to accidentally turn one intended review mark into another. — Which makes that far less likely. — Less frequent.

bp  >  ix  >  gd  >  su  >  er  >  pt
⇊      ⇊      ⇊      ⇊      ⇊      ⇊
 3  >   2  >   1  >   0  >  –1  >  –2

Only a –2 is properly a bad review. A –1 is really a neutral review. But time spent reading has a cost: — therefore neutral reviews count as negatives. Reading time is budgeted; this cost — the next best opportunity foregone — is the other things not read only because these things were read. — So everything marked 0, 1, 2, 3 is basically recommended.

 


 
— 〈  2  〉—

REVIEWS — ADDED: 15

\NONFICTION: \section{A}: 3

 
bp   [AND62]   Alex ANDREW, Learning in a nondigital environment, Aspects of the theory of artificial intelligence, New York: Plenum, 1962.

gd   [AND09]   ↑↑↑, A missing link in cybernetics: logic and continuity, New York: Springer, 2009.

bp   [ASH62]   Ross ASHBY, The self reproducing system, Aspects of the theory of artificial intelligence, New York: Plenum, 1962.

\NONFICTION: \section{B}: 2

 
bp   [BOY90]   Pascal BOYER, Tradition as truth and communication, Cambridge: University Press, 1990.

bp   [BRA84]   Valentino BRAITENBERG, Vehicles, Cambridge: Massachusetts Institute of Technology Press, 1984.

\NONFICTION: \section{C}: 3

 
er   [CAR45]   Rudolf CARNAP, The two concepts of probability, Philosophy and phenomenological research, 5(4):513–532, 6.1945.

er   [CAR50]   ↑↑↑, Logical foundations of probability, Chicago: University Press, 1950.

gd   [CAR52]   Rudolf CARNAP, Yehoshua BAR-HILLEL, An outline of a theory of semantic information, Cambridge: Massachusetts Institute of Technology Research Laboratory of Electronics, 1952.

\NONFICTION: \section{F}: 1

 
bp   [FOE62]   Heinz FOERSTER, Circuitry of clues to platonic ideation, Aspects of the theory of artificial intelligence, New York: Plenum, 1962.

\NONFICTION: \section{J}: 1

 
su   [JEU96]   Johan JEURING, Patrik JANSSON, Polytypic programming, Advanced functional programming, Berlin: Springer, 1996.

\NONFICTION: \section{N}: 1

 
bp   [NOR72]   Donald NORMAN, Peter LINDSAY, Human information processing, New York: Academic Press, 1972.

\NONFICTION: \section{R}: 1

 
pt   [RAW71]   John RAWLS, A theory of justice, Cambridge: Harvard University Press, 1971.

\NONFICTION: \section{S}: 3

 
gd   [SCOT02]   Alwyn SCOTT, Neuroscience, New York: Springer, 2002.

gd   [SCOT07]   ↑↑↑, The nonlinear universe, Berlin: Springer, 2007.

pt   [SEN09]   Amartya SEN, The idea of justice, Cambridge: Harvard University Press, 2009.

 


 
— 〈  3  〉—

UNORGANIZED COMMENTS AND THOUGHTS

 

\section{COMMENT # 3}

Very rarely do I bother with bad reviews. I do try not to read things I anticipate are so much rubbish. In the words of Schopenhauer: we don't have the time to read all the good books, so why should we read bad ones except by accident?

You may observe that things which I do not recommend in some way are scarce in this blog. I try to read, and recommend, what is actually worth reading.

Now see the two bad reviews.

So regarding those books, I'm rather disappointed. — Expected more. — Considering how long they were. The amount of hand waving in them ... was ... significant. — Too much, I suggest. — And meanwhile the signal to noise ratio was ... very low ... for both.

Furthermore both authors clearly try to get around the reader with broken reasoning. — Even with broken heuristics — which don't even have to be reasoning. (Did they hope the reader wouldn't notice that sort of thing?)

Broken heuristics, considering the standards for a useful heuristic are very low [BOD06, WAT85], are often proposed by authors quite intentionally. — And so that is what I suspect.

They sneak in assumptions not significantly different from their theses. — Rather than arriving at their points — where there are any points — from significant general observations, from assumptions that do not contain the point once properly separated, or from assumptions separated from the conclusion by a gulf of substantial computational complexity. Linguistic tricks, meanwhile, are not in short supply. — And they don't really follow anything to its conclusions. — Too much hand waving, distracting the reader, while the other hand brings in an unrelated conclusion and lo! as if it had always been there. (Something about that subject matter has mostly attracted broken reasoning. Why?)

\section{COMMENT # 2}

Subject for another post.

"Noaccount": accounts without accounts.

Brainstorming about social media where no account creation is necessary. Just post or not.

Focus on a minimal, optimized design with a smooth average user experience.

For modularity we can make small, smart sockets that can bind properly.

I was rereading Vehicles by Braitenberg [BRA84], and thinking about simplicity. Why not just a very simple DPoS model with smart sockets? Referendum voting for change. Every 100 upvotes on any post you made gives you one vote, as does holding 1 token. Condorcet preference-list voting. Votes are accumulated at the end of an interval; there must be a quorum to begin the interval. For every 100 upvotes that accounts have themselves received, they have one of their votes counted towards 100 for another account.
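A minimal sketch, in Python, of how that weighting and a Condorcet-style tally might look. The threshold constant, the ballot format, and the tally function are illustrative assumptions, not a spec:

```python
# Sketch: vote weight from upvotes received and tokens held, and a pairwise tally.
# Assumptions (illustrative only): 100 upvotes -> 1 vote, 1 token -> 1 vote.
UPVOTES_PER_VOTE = 100

def vote_weight(upvotes_received: int, tokens_held: int) -> int:
    """One vote per 100 upvotes received on your posts, plus one vote per token held."""
    return upvotes_received // UPVOTES_PER_VOTE + tokens_held

def tally_referendum(ballots):
    """Condorcet-style pairwise tally over weighted preference lists.

    Each ballot is (weight, [option, ...]) ordered from most to least preferred.
    Returns pairwise margins: margins[(a, b)] = total weight preferring a over b.
    """
    margins = {}
    for weight, prefs in ballots:
        for i, a in enumerate(prefs):
            for b in prefs[i + 1:]:
                margins[(a, b)] = margins.get((a, b), 0) + weight
    return margins

# Example: two accounts voting on a proposed change.
ballots = [
    (vote_weight(250, 3), ["raise_quorum", "keep_quorum", "lower_quorum"]),
    (vote_weight(90, 1), ["keep_quorum", "lower_quorum", "raise_quorum"]),
]
print(tally_referendum(ballots))
```

A full Condorcet method would still need a completion rule (Schulze, ranked pairs, etc) for the case where no option beats every other option pairwise.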

And here's the important thing: they get to submit, as a list, in a special kind of post, the order in which some accounts they've upvoted get one of their own upvotes counted towards 100 for those accounts to get an extra vote.

Now what is desirable is to have accounts ... without having accounts ...

That can be done with a new user posting a public key for the posts that they want "counted" towards an ad hoc account, if they get upvoted.

Just a field in any post: if users want to "add" to an account, they just make a key and post it, to tag their post. Or they just leave the field empty. Posting does not itself involve logging in, and there are no memory problems either for calls to any account history for lists of subscribed and subscribers. Rather, users who want to follow somebody just make a tiny post with a hash of the public key of whomever they want to follow, and add their own key. They also (probably with the assistance of a bot that helps with lookup) do something like paste the previous account they subscribed with.
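A minimal sketch of the data shapes involved, where the field names, the toy keypair, and the SHA-256 hashing are assumptions of mine rather than anything specified above:

```python
# Sketch: "accounts without accounts". A post may carry an optional key field,
# and a follow is just a tiny post holding a hash of the followed key.
# Field names, the toy keypair, and SHA-256 are illustrative assumptions.
import hashlib
import secrets

def make_keypair():
    # Toy placeholder only; a real system would use proper asymmetric keys.
    private = secrets.token_hex(32)
    public = hashlib.sha256(private.encode()).hexdigest()
    return private, public

def make_post(body, account_key=""):
    # The key field may simply be left empty; posting involves no login.
    return {"body": body, "account_key": account_key}

def make_follow_post(own_key, followed_key):
    # A follow recorded as data, not as a server-side call against an account database.
    return {"follow": hashlib.sha256(followed_key.encode()).hexdigest(),
            "account_key": own_key}

_, alice = make_keypair()
_, bob = make_keypair()
print(make_post("hello, no account needed"))
print(make_post("counted towards my ad hoc account", alice))
print(make_follow_post(own_key=alice, followed_key=bob))
```

The "subscribed / subscribers" lists then reduce to ordinary search over such posts, as described next.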

Ordinary search in the front end, plus some display processing, can quickly follow a gradient and show a list of subscribed-to and subscribers. No calls ever occur, however, for 10, 100, 1000, etc accounts at once, to be served in the order received. Furthermore bots can produce database posts periodically, and these can be processed to speed things up; in other words, ordinary search optimizations.

So basically, if we are making a chain, missed blocks are possible. The way to make a robust system would be to have more than one observer bot for every other bot. So if we go for an authentication bot, it'll also be decentralized. Several bots, message passing.

Yes, I think such bots will be needed.

The basic functionality we'll need is something like the following: We have mailboxes that receive messages and, depending on whom they receive a message from, mark it (no further modification) or not, and may forward it, or hold on to it.

They should be able to forward a message, which they mark or not, randomly.

This first of all helps with security. It also simplifies logic, and message passing.

For example: M 1 receives a message from X. It marks it, and randomly passes it on, to M 42. This marks it, and randomly passes it on to M 23, ...

Meanwhile Y randomly reads the mailbox M 64. Never checks any other box. And if it finds a message there, it checks if it has at least, say, 7 marks, and if yes, processes it, if not, tells M 64 to mark it and pass it onward.

Some mailboxes can get signatures in messages, and basically sign messages. Other messages can be of the sort: if received from A, do B, send to C. Or adjust probabilities of sending to various other mailboxes.
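A minimal sketch of that mailbox behaviour, with the trusted-sender rule, the 7-mark threshold, and the hop count as illustrative assumptions:

```python
# Sketch: mailboxes that mark a message depending on the sender, forward it at
# random, and a reader that acts only on sufficiently marked messages.
# The trusted-sender rule, thresholds, and hop count are illustrative assumptions.
import random

N_MAILBOXES = 100
MIN_MARKS = 7

class Mailbox:
    def __init__(self, name, trusted_senders):
        self.name = name
        self.trusted_senders = trusted_senders

    def receive(self, message, sender):
        if sender in self.trusted_senders:
            message["marks"] += 1                 # mark it; no further modification
        return random.randrange(N_MAILBOXES)      # index of a random next mailbox

# Here every mailbox trusts X and the other mailboxes, so each hop adds a mark.
mailboxes = [Mailbox(i, trusted_senders={"X", *range(N_MAILBOXES)}) for i in range(N_MAILBOXES)]

message = {"payload": "post this", "marks": 0}
current, sender = 1, "X"                          # X drops the message into M 1
for _ in range(20):
    nxt = mailboxes[current].receive(message, sender)
    sender, current = current, nxt

# Y only ever reads, say, M 64, and only acts on well-marked messages.
if message["marks"] >= MIN_MARKS:
    print("Y processes:", message["payload"])
else:
    print("Y tells M 64 to mark it and pass it onward")
```

The random walk is also what later provides the load smoothing and the free source of randomness discussed below.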

But one key feature is we can implement rationing. Smooth load. And give temporary specialization to mailboxes.

Also nothing breaks, strictly, if we remove a few pieces of the system at any point, for whatever reason, change something. Then put it back in again.
So we can work with statistical algorithms, and load on the system is always smooth. Which is useful.

For security we need to implement some kind of actor model.

With randomness, so no special point of attack for a bad actor. Whether some organized entity outside the system, or a system user.

Meanwhile, we can use this to get an average speed of response to calls.

Nothing will be much slower. Nothing will be much faster.

Simplifies design.

We can break up a complicated algorithm into steps that get performed randomly, but sequentially if needed, and on average, complete, if they can be completed, no later than X seconds after they were requested. But the order in which they came in is now completely irrelevant. It also makes it hard for any adversary to game the system. (Compare with some existing chains.)

By the way, because any step is a bite sized piece of code, we can prove safety and liveness properties about it. Maybe implement an automatic checker. This can help with authentication.

Order doesn't matter, so what we may want or need to prove in any case is limited to these small pieces taken one by one, rather than dealing with a growing complexity as features and bots are added to the system and interact.

Basically, we'd have a system that won't give a bot that no longer safely composes with the rest, for some reason, say because it was maliciously or erroneously modified, enough "marks" for its API calls to the chain to do much, if anything.

Because each mailbox can do a check of the following type: Let's say B is the bot taken as data, a function. And t is a test input, f a test operation. If f ( B ( t ) ) is invalid, the mailbox doesn't mark the message. And passes it on.

Periodically timestamped.

Meanwhile critical mailboxes, say ones that allow a bot to post to the chain, or compose with other bots, to display something, require N marks, of any kind, to have been collected in < 90 seconds.

The bot fails to collect enough marks to do anything. And is discarded.

Sometimes a valid bot will fail, statistically. But if it tries again, because it will pass all the tests, it has a very high probability of collecting the needed marks.

After several "retries" we can distinguish valid calls from invalid (malicious or just broken) code.
(With very high probability.)
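A sketch of that gate, assuming a mark probability per hop, the 90 second window, and the N = 7 threshold purely for illustration:

```python
# Sketch: a bot's call only goes through if enough mailboxes mark it in time.
# Each mailbox tests the bot, taken as data, on a test input t with a test
# operation f; invalid output means no mark. All numbers are illustrative assumptions.
import random

N_REQUIRED = 7
WINDOW_SECONDS = 90

def f(output):
    """Test operation: here, 'valid' just means a nonnegative integer."""
    return isinstance(output, int) and output >= 0

def try_to_collect_marks(bot, t, seconds_per_hop=0.5, mark_probability=0.1):
    elapsed, marks = 0.0, 0
    while elapsed < WINDOW_SECONDS:
        elapsed += seconds_per_hop
        # a random mailbox checks f(B(t)); if invalid it doesn't mark, just passes it on
        if f(bot(t)) and random.random() < mark_probability:
            marks += 1
        if marks >= N_REQUIRED:
            return True
    return False     # not enough marks collected in time: the call is discarded

valid_bot = lambda t: t * 2        # composes safely; usually collects its marks
broken_bot = lambda t: "garbage"   # maliciously or erroneously modified; never marked

print(try_to_collect_marks(valid_bot, 3))    # True with very high probability
print(try_to_collect_marks(broken_bot, 3))   # False; retries will not help it
```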

Much of what happens will be based solely on statistics of counts, after feeding in critical actors into the swarm.
All we'd have to build is the individual actors. Which are generally very simple, specialized functions.
For scaling, in principle, each one can expand into all the resources on a single server, if that is ever needed. Nothing changes.

Which means if we need more RAM, we can distribute the system over multiple servers. Don't need each server to get more RAM. Very low overhead. Which means users can add their own nodes. Meanwhile if they change something inappropriately, their node just fails to do anything. And we can also statistically detect which node is useless, and therefore something is wrong with it.

For example, if function G needs 16 GB RAM, up from 8 GB, we can drop a copy of G into the swarm on another 8 GB machine. Same as giving 16 GB to a single G. The reason being that the only cause for needing more RAM would be more messages passing through the system at that point.

And change to the whole system would be negligible: only statistics really matter.

I propose we build on top of that kind of framework.

The real nice thing about having an "average" time, everything being random, order not mattering too much, is that we can use SSD instead of RAM for some processes without issues.

Most AI is statistical. It's actually an open problem how to prove safety of statistical algorithms. (For self driving vehicles, for example.)

Consider an inventory of bots.

User can drag and drop the bots into a post.

For example, a "smart" tagger.

We'll probably have to build such tools. As we'll want to automate the process.

Tools would themselves be bots. Periodically checking other bots.

If we can break it into very small pieces that compose in a group, it becomes easy. And why? Because such pieces are atomic in nature.

Smart sockets are one way of doing it.

Bots observe each other in graphs, one or more for each one. Multiple processes observing any single process, in the same swarm.

If one crashes, no problem. And a timestamped message gets passed around that it crashed.

Eventually it comes to the right system. Which spawns a new process of the correct type.

The key is to stick to very narrow function pieces, and treat bots as ordinary maps. These can be processed and tested very easily by another bot.

And mere composition can be a test.

This produces a hash.

Which leads to distributions of hashes.

There is some discussion above regarding some kinds of bots we may want to start with. I suggest the key is convenience.

For example, consider a bot that takes photos named with a tag, and creates an "instagram" type post automatically. Based on current statistics of what is most popular.

The user selects a folder, selects an image host, and the bot does the rest. Interface is drag and drop.

Now imagine another bot. This performs some action with the new page produced by the first bot. Say informs, in a short comment, a random sample of the followers of the author, with a summary of the photos. (Based on the tags.)

From the perspective of the user however, it was just simple: (1) Select folder and host, (2) Drag onto the post field bot A from the inventory. Drop it onto the icon for the folder. (3) Drag onto the post field bot B from the inventory. Drop it onto the icon for bot A.
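A sketch of that two-bot composition. The folder scan, the tag-from-filename convention, and the follower sampling are illustrative assumptions:

```python
# Sketch of the two-bot composition described above. Folder scanning, tag parsing
# and the "inform followers" step are all illustrative assumptions.
import random
from pathlib import Path

def bot_a(folder: str, image_host_url: str) -> dict:
    """Turn a folder of tag-named photos into an 'instagram'-type post."""
    photos = sorted(Path(folder).glob("*.jpg"))
    tags = [p.stem.split("_")[0] for p in photos]           # tag taken from the filename
    links = [f"{image_host_url}/{p.name}" for p in photos]
    return {"title": " ".join(f"#{t}" for t in sorted(set(tags))), "images": links, "tags": tags}

def bot_b(post: dict, followers: list, sample_size: int = 5) -> list:
    """Act on the page bot_a produced: comment a summary to a random follower sample."""
    summary = "New photos: " + ", ".join(sorted(set(post["tags"])))
    chosen = random.sample(followers, min(sample_size, len(followers)))
    return [{"to": f, "comment": summary} for f in chosen]

# From the user's perspective: select folder and host, drop bot A, drop bot B.
post = bot_a("photos/trip", "https://example-image-host.test")
comments = bot_b(post, followers=["ann", "bo", "cy", "di"])
print(post, comments)
```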

The primary issue of cryptos seems to be mainstreaming. For the mainstream social media crowd.

Drag and drop composing. (But at a different level of granularity. Some projects exist that use blocks-based programming; the suggestion instead is that we're doing blocks-based composing of AI.)

In our case, the AI composes because, in the end, there is just message passing and buffers.

When f g is valid but g f is invalid, this is controlled by marks.

g f will almost never happen, because f needs too many marks to get composed in that manner with g.

But on the other hand, f g requires few marks. So we have control over sequence when needed, but statistical. And generally just ignore it, when it does not, or should not matter.
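A tiny sketch of sequence control through marks, with the thresholds as illustrative assumptions:

```python
# Sketch: composition order controlled by mark requirements. "f g" needs few marks;
# "g f" needs so many that it will essentially never be assembled.
# The thresholds are illustrative assumptions, not anything fixed above.
MARKS_REQUIRED = {("f", "g"): 3, ("g", "f"): 10**6}

def can_compose(outer: str, inner: str, marks_collected: int) -> bool:
    return marks_collected >= MARKS_REQUIRED.get((outer, inner), 10**6)

print(can_compose("f", "g", marks_collected=5))   # True: f(g(x)) is cheap to authorize
print(can_compose("g", "f", marks_collected=5))   # False: g(f(x)) practically never happens
```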

There can be thousands of operations occurring, but we can just ignore pieces floating around that are not relevant for some operation. (The rest won't get composed in between the ones that are supposed to follow one another, except by some fluke, in which case that will crash or be passed around too long and rejected, and retried.)

If the average time for things to settle is 30 seconds, we can just treat

a b c d e f g
as
b ( a ( e ) )
or
d
or
g ( c ( f ) )

depending on what operations we are trying to implement, by setting marking requirements, for example.
In the end it will almost always happen like that.

After about 30 seconds.

One thing happening in some order will not affect some other things happening in order. We just need to design g, c, f, if interested in g ( c ( f ) ).

Can ignore the rest meanwhile.

The nice thing about the indeterminism from the random message passing is that we can hook into it whenever we need randomization. For free.

It's done anyway. May as well hook into it. So randomness is very inexpensive.

With proper NLP, random seeds can make bot comments interesting.

Since the randomness is already present, not an additional process that we need to wait for, we can make use of it as much as we need.

And the important thing is that the randomness is not completely arbitrary, but related to what's going on in the network.

To make best use of that, we can put in a few thresholds.

For example, if usage in the last minute was low, we get random integers 4, 7, 2, 9, 13, etc, .... but if there were 50.000 users at once ... we'd get random integers 200, 123, 452, 123, etc, ... that could be useful.
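A sketch of that kind of threshold, with the cutoffs and ranges as illustrative assumptions:

```python
# Sketch: network-coupled randomness. The range of the random integers tracks
# recent usage, with a few thresholds; the exact cutoffs are illustrative assumptions.
import random

def network_random(active_users_last_minute: int) -> int:
    if active_users_last_minute < 100:
        return random.randint(1, 16)         # quiet network: small integers (4, 7, 2, 9, 13, ...)
    elif active_users_last_minute < 10_000:
        return random.randint(16, 128)
    else:
        return random.randint(128, 512)      # 50.000 users at once: large integers (200, 123, 452, ...)

print(network_random(30), network_random(50_000))
```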

(It's also part of the toolkit how we can approach security. Safety properties.)

It can trigger a process that creates mailboxes or removes mailboxes it created from the swarm. And create feedback that affects average time. If necessary. (Like nonsynaptic neurotransmission: something like caffeine can increase all the firing thresholds for neurons in the brain at once. For example, to maintain some state.)

There are many good proxies of the average time for some process to complete, or to reach a point such that, if continued, it's probably not going to complete.

Neural nets that learn from hashes of data, never see data itself, and still perform well, would be significant.

The mathematics is also worth a couple papers, I think.

I'm actually working on the dynamics for a "realistic" neuron. One which does computing in the dendrites, has more than two synapse types, and whose synapses move around.

What really matters is proving shortcuts that work for predicting what properties the system will observe.

Ordinary nets with convolution layers can distinguish certain data better than humans, and other data they cannot distinguish at all. Which humans distinguish easily.

Some images actually lack the information. Embodied cognition is a thing.
Basically, consider that a human approaches an object, interacts with it. Moves it around, sees what he can do with it. Thus identifying it.

He then walks off, then turns around, and knows there, that blurry image, is [whatever he identified it as].

And can learn to distinguish.

Meanwhile unsupervised learning cannot handle this case; no body; no history.

Not enough information in the image itself.

Ordinary habit.

This issue actually depressed a lot of researchers at Stanford. As they became convinced that until robots become advanced enough, and humanoid robots exist, AI is very limited. Not really the case, but that's what many thought.

CAPTCHAs work for this reason, not just computational complexity.

(SAT meanwhile is not strictly solved in general, but there are plenty of fast solvers for most cases. Which means Peter Kugel or Knuth may be correct that even if consistent general solutions don't exist, almost-general solutions may exist: ones that make a few errors or contradictions in uninteresting cases, but solve all other cases.)

Fast in all important cases, wrong or slow in a few very rare cases.
That kind of solution, outside of AI, is not generally sought out. (For example, evolutionary algorithm solutions exist for just about everything, but it was recently proved they are not very efficient in general. Which is not a major surprise.)

Working on a proof logic framework. Arithmetized groups is the idea: if we add up the integers we get after testing and L = R, all is good.

For various checks, I'll try to make things work such that we randomly check in randomly different ways, and something is correct iff the sum for each check is the same. For several random checks, regardless of what the sum happens to be. What matters is that we get the same integer in the end in each case. Which means we prove invariants.

If it works, it means we get natural error codes. Differences that are also invariant and also integers.

Trying various formalizations to see what's simplest to compute and implement. (We want the system to be doing it automatically. So what I need is for every group of behaviors to have a generic test that gives a unique integer.)
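A minimal sketch of the invariant idea: check the same thing in randomly different ways and demand the same integer each time. The example quantity is an assumption of mine:

```python
# Sketch: check the same quantity in randomly different ways and require each way
# to produce the same integer (L = R); a difference is itself an integer and serves
# as a natural error code. The example checks are illustrative assumptions.
import random

def check_by_rows(grid):
    return sum(sum(row) for row in grid)

def check_by_columns(grid):
    return sum(sum(col) for col in zip(*grid))

def check_by_random_walk(grid):
    cells = [(i, j) for i in range(len(grid)) for j in range(len(grid[0]))]
    random.shuffle(cells)                      # a randomly different way of visiting the data
    return sum(grid[i][j] for i, j in cells)

grid = [[1, 2, 3], [4, 5, 6]]
checks = [check_by_rows(grid), check_by_columns(grid), check_by_random_walk(grid)]
difference = max(checks) - min(checks)
print("ok" if difference == 0 else f"error code {difference}")
```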

Now some thoughts about reasoning about social media platforms.

How to think about these things: in terms of lazy pointers.

Imagine a description, and all things that satisfy the description are found and the description is replaced by the unordered list of things, if any, which it describes.

That's the usual way mathematics is done. However that's not constructive. [Can't use it on a computer.] No procedure is present that finds these things, if any are to be found. And there is no proof that such a procedure exists, other than trial and error, even if there's proof that the list is not empty when we expect it to be not empty.

Now instead consider a different approach. Imagine a list of N different 「ingredients」. We can take each in any quantity. Each is simply a named quantity. — Or else they're things like procedures to be called, if those are numbered. Many processes can be represented. Same framework.

Each bundle of some quantities is a 「thing」. A result. We're interested in predicting the building of results. — So far as complexity does not exclude predicting results.

Suppose we looked at our complex system as a sequence of much simpler groups.

Each group is 「every thing a machine can construct from any quantity of exactly X ( ≤ N) ingredients」.

Same type. There would be a single generic procedure to test what's in each such group.

There exist continuous [or 「small」 marginal] transformations of one group into another group. One type of quantity goes to 0 by an increment; meanwhile a single other quantity that was 0 must now differ from 0. Different group.

The order of hopping from group to group clearly doesn't matter. [Good. If we're going to have machines automatically test anything, we just eliminated one source of complexity by living in that framework.]

In our case, we're not going to build all the results in each group. Just a random sample.

Consider a list of ingredients for muffins in terms of separate piles of all the muffins you can make from exactly three ingredients from the list. Any quantity of each ingredient is allowed. But the same ingredients. One important thing is to stick to results that can be built indirectly from some two or more other results in the same group. (So you can make a big muffin from two small muffins of the same three ingredients in different quantities. Just mush together.)

For each N, there exist continuous transformations of the Nth sequence of such groups and transformations into the Mth sequence. — If N − 1 = M or N + 1 = M. — Because there exist continuous transformations, at the same point of transition as before, of any group in the Nth sequence into a group in the Mth sequence.

The goal is to break down the combinatorics and make it both comprehensible and manageable. (By the same process we also just made every combinatoric aspect of whatever we're doing constructive and concurrent.)

And if we need to prove things, we have well defined, delimited groups, relatively small, we can point to lazily. We can always make a generic terminating procedure, worst case a different one for each group of the type above, at some level of granularity, that makes each group.

Sampling then means working at coarser granularity and not using very refined generic routines that construct the groups.

The desired goal is a small list of methods, generators, that in some order, we can be sure, will generate a generic test for any group that gives it a unique value. — Some arbitrary distinguishable constant integer. — And no combination ever makes a test that generates the same value for different groups.
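One possible such generic test, offered as an assumption rather than as the intended construction: assign a prime to each ingredient and give each group the product of the primes in its ingredient set. Unique factorization then guarantees that no two different groups share a value:

```python
# Sketch (an assumption, not the author's construction): a prime per ingredient,
# and each group's value is the product of the primes in its ingredient set.
# Quantities don't enter the value, so any bundle of the same X ingredients lands
# in the same group, and distinct groups can never collide.
def first_primes(n):
    primes, candidate = [], 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

INGREDIENTS = ["flour", "sugar", "egg", "butter", "cocoa"]       # N = 5, names illustrative
PRIME_OF = dict(zip(INGREDIENTS, first_primes(len(INGREDIENTS))))

def group_value(bundle: dict) -> int:
    """A bundle maps ingredients to quantities; only the support set matters."""
    value = 1
    for ingredient, quantity in bundle.items():
        if quantity > 0:
            value *= PRIME_OF[ingredient]
    return value

# Two muffins from the same three ingredients in different quantities: same group, same integer.
print(group_value({"flour": 2, "sugar": 1, "egg": 1}))
print(group_value({"flour": 5, "sugar": 3, "egg": 2}))
# A different set of three ingredients can never collide with it.
print(group_value({"flour": 2, "sugar": 1, "butter": 1}))
```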

We want to be able to reason about types of events. Without having to discuss or model or delve into the combinatorics.

So groups become objects in category theory. (Don't focus too much however on popular things like comonads. For example, that's basically nonsequential undo functionality. That's not going to solve any important problems.)

Objects just means we don't have to look inside. And can combine in any order.

Basically one approach has us hashing results while they're still in process of being built. And without looking exactly at what they are. We know the boundaries of what can happen. But we don't know, of course, what will happen. Rather we delimit and hash and reason in terms of those boundaries — the transition points between groups.

That's the plain language gist of the approach at the moment. The write up is moving along. Basically we need an API that can lazily point to things that users can do and that we can do, classify them, and if needed start building pieces generically by just randomly cycling through a very small list of primitive methods. If AB doesn't work, and BCD doesn't work, the next attempt, AD, will work with high probability. [Meanwhile we know what to expect and can reason about it.] What is desirable is to have the intelligence come from randomness, where soon enough some random combination of methods will do whatever is needed, and meanwhile predictable bounds exist on the when exactly and the what exactly in this sense.
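A sketch of that random cycling through primitive methods, where the primitives and the acceptance test are illustrative assumptions:

```python
# Sketch of "intelligence from randomness": keep drawing small combinations of
# primitive methods until one passes the generic test. The primitives and the
# acceptance test are illustrative assumptions.
import random

PRIMITIVES = {"A": lambda x: x + 1, "B": lambda x: x * 2,
              "C": lambda x: x - 3, "D": lambda x: x ** 2}

def passes(combo, x=5, target_min=20):
    value = x
    for name in combo:
        value = PRIMITIVES[name](value)
    return value >= target_min                # the generic acceptance test (an assumption)

def random_build(max_attempts=50):
    names = list(PRIMITIVES)
    for _ in range(max_attempts):
        combo = random.sample(names, k=random.randint(1, 3))   # e.g. "AB", "BCD", "AD"
        if passes(combo):
            return combo                      # with high probability one of the first few works
    return None

print(random_build())
```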

All invariants should be the same regardless of how they are arrived at.

The goal would be that we can predict the bounds of what will happen. — And see desired actions in any combination complete, within these bounds, with very high probability during a specific interval. — Starting count as soon as some action is performed.

So for example, some complex processing task will very rarely take > 10 seconds, if we design for that, but will often take much less. And even if we don't know what users will do, we know exactly the possibilities, and the joint possibilities. (A ⊗ B in the a ⊗ b ∊ A ⊗ B, for user actions a ∊ A, b ∊ B.) Other things known statistically.

We may basically avoid exponentials. Partly by using random algorithms. And partly by breaking into sets of things that are unconnected and can be isolated, where decisions about one sample can help decide decisions about other samples. It does introduce mathematical constraints ... to allow things to be broken up as above and avoid huge combinatorics in any interval as problems get large.

The key is to have at least one of the first few guesses be "good enough"; meanwhile split any processing work into quickly constructed chunks that are independent, so the irrelevant ones are known to be irrelevant, don't have to be looked inside, and, being independent, can be ignored; meanwhile the boundaries of behavior spaces are predictable, including for joint combinations; and define gradients of small atomic pieces to follow, one after another, until a problem has some kind of output, on average.

In finance, large calculations for derivatives with no closed form algebraic solution are approximated by breaking them into additive pieces and mapping them into a space where it is predictable how much each part of the space will contribute to the solution, even if the solution is not known and some parts are very costly to compute; where costs > benefits, that part is ignored.

If we need speedup, we adjust the level of granularity in what constitutes a group, or what group we use for what, for example.

And the system will do this on the fly. It should not involve more than manipulating integers and calculating some test cases. (Trade off is careful initial design considerations. Chains seem to be built less carefully in general. But result is then they can't do something like this.)

For example, if some bot introduces large latency when combined with another specific bot, we'll see exactly which type of interaction is causing this latency.

Then we have options.

(a) We can do like in finance and just block or ignore that group specifically. Because we can isolate it.

(b) We can tradeoff some predictability about outcome for maintaining consensus and speedup. Result will be more random; but randomness well done provides a lot of capabilities in speed, provided we load up the system with AI methods to try in some order, possibly concurrently, which will all basically rely on randomness to get their work done.

\section{COMMENT # 1}

Was thinking how to mix things like STEEM and IPFS in a useful but relatively easy way.

Familiar with Renpy? (https://www.renpy.org/ ... Meant for visual novels ... actually best to use it for occasional fancy animated presentations instead.)

Now suppose that the latest version was dumped on a server.

Users make a post on Steemit that is a script for Renpy.

Images are simply linked at the very bottom of the post.

We have a link on the first line of the post that brings the user to a front end with a single field. Then the user enters a link to the post and the plain text is scraped, treated as a script for Renpy.

A presentation executable file, with all the animation capabilities of a game engine, is generated for download to the user .... it runs on basically every OS. IPFS is used to drop files and host the presentation for a short while.
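A sketch of the scrape-and-compile step. The post fetching, the script extraction rule, and the build invocation are placeholders of mine, not a working Ren'Py integration:

```python
# Sketch of the scrape-and-compile step, assuming a hypothetical front end.
# Fetching, script extraction, and the build command are illustrative placeholders.
import subprocess
import urllib.request
from pathlib import Path

def fetch_post_plain_text(post_url: str) -> str:
    # Placeholder: a real front end would use the Steem API to get the post body.
    with urllib.request.urlopen(post_url) as response:
        return response.read().decode("utf-8", errors="replace")

def extract_script(post_text: str) -> str:
    # Assumption: everything after the first line (the front-end link) and before
    # the image links at the bottom is treated as the Ren'Py script.
    lines = post_text.splitlines()[1:]
    return "\n".join(ln for ln in lines if not ln.startswith("http"))

def build_presentation(post_url: str, project_dir: str = "presentation_project"):
    script = extract_script(fetch_post_plain_text(post_url))
    Path(project_dir, "game").mkdir(parents=True, exist_ok=True)
    Path(project_dir, "game", "script.rpy").write_text(script, encoding="utf-8")
    try:
        # Placeholder build invocation; the real command depends on how Ren'Py is installed.
        subprocess.run(["renpy", project_dir, "distribute"], check=True)
    except FileNotFoundError:
        print("Ren'Py launcher not found; script written to",
              Path(project_dir, "game", "script.rpy"))
    # The resulting build could then be pinned to IPFS and the link posted back.

# build_presentation("https://steemit.com/@someone/some-post")   # hypothetical post URL
```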

Is there any other interesting low hanging fruit that could demo an important use case ... ? Presentations with animations, compiled in the cloud and over social media, should be easy to stir up some interest about — what do you think? 「On-the-cloud anti-powerpoint」 ?

ABOUT ME

I'm a scientist who writes fantasy and science fiction under various names.

                         ◕ ‿‿ ◕ つ

      #writing   #creativity   #science   #fiction   #novel   #scifi   #publishing   #blog
            ♥ @tribesteemup  @thealliance  #isleofwrite   #nobidbot  @smg
                      #technology   #cryptocurrency   #history   #philosophy
                           #realscience   #development   #future   #life

 

UPVOTE !     FOLLOW !

 
|   SCIENCE FICTION & FANTASY   |   TOOLS & TECHNOLOGY   |
|   PRACTICAL THINKING — LATEST   RECENT   POPULAR   |

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License  . . .   . . .   . . .    Text and images: ©tibra. @communicate on minds.com


Somewhere at the very top of the text above I put a tag: — Revised: Date.

I'll often, later, significantly enlarge the text which I wrote.

Leave comments below, with suggestions.
              Maybe points to discuss. — As time permits.

Finished reading? Really? Well, then, come back at a later time.

Guess what? Meanwhile the length may've doubled . . . ¯\ _ (ツ) _ /¯ . . .


2018.8.12 — POSTED — WORDS: 1.250
2018.8.13 — ADDED — WORDS: 100
2018.8.19 — ADDED — WORDS: 4.100
 


