Feature Request: Remove Top-Level Domain requirement #44
Comments
Hey, so basically, we need to allow blocking using keywords instead of URLs only? 🤔
That would be my preference. Site blocking is fine, but I can spend hours chasing down images or videos on Google or Bing, and I don't want to block those sites themselves.
Yeah, I get what you mean. Still, I'm wondering whether using keywords means we should inspect the website content as well & not only the URL(s), & going through the content of all the open tabs to block some specific parts might be a bit tricky & may involve some performance pitfalls. Anyway, if you have any specific suggestions on how to implement that, it would be helpful; otherwise I'll try to study it when I have some time.
Mmm. I'm not sure about content inspection. If I get a random targeted ad for NFL.com, I don't want my research page blocked. And I imagine managing the Whitelist vs. Blacklist collisions in that space would get messy... I think matching the URL against a set of keywords would be sufficient. And maybe leave the Whitelist as URLs only, for easier management and to avoid collisions. If it's easier, perhaps optionally allow a Regex instead of a URI? 😜
Actually, regexes are already supported; it's just that the allowed regex format is a bit restricted (you can check that here & here as well). So I think that if you try something along those lines it should work, & a few other variants are possible too.
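Purely as an illustration of that kind of restricted pattern (the pattern syntax and the conversion helper below are assumptions for this sketch, not the extension's documented behaviour), a URL-style wildcard entry could be expanded into a JavaScript regex and tested against full URLs:

```js
// Hypothetical sketch: expand a URL-style wildcard entry such as "*.nfl.com/*"
// into a case-insensitive RegExp and test it against complete URLs.
function urlPatternToRegExp(pattern) {
  const escaped = pattern
    .replace(/[.+?^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*/g, '.*');                 // "*" acts as a wildcard
  return new RegExp(escaped, 'i');
}

const rule = urlPatternToRegExp('*.nfl.com/*');
console.log(rule.test('https://www.nfl.com/scores'));          // true
console.log(rule.test('https://www.google.com/search?q=nfl')); // false: different domain
```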
This still requires an NFL domain name, but it doesn't prevent searches unless I block Google.
Well, I agree that looking for a keyword in the whole URL is more flexible & easier from a UX point of view 🤔. Still, it can't really be described as parental control, since we don't inspect the content of the pages, & on some websites the search query isn't included in the URL, so it won't work for such cases. All that said, I think I'm going to add a keywords list feature, 'cause I don't wanna mess with the actual blacklist/whitelist system (for each entered URL we try to get the website favicon & that won't work with keywords, & I honestly don't like to merge separate features). Thanks for all your suggestions by the way & stay tuned! I may work on this feature this weekend.

Edit: if you have any other suggestions, feel free to share your thoughts.
@OttScott what do you think of the following (from a UX perspective):
@AXeL-dev, I love it!
I think that having keyword matching on both the whitelist & blacklist provides more flexibility (even if that feature may not be so useful on the whitelist).
It's a good idea actually, & I struggled a little bit to find a good way to allow both text-only matching & regex matching, so finally I came up with a simple idea for how keywords could support both.
WDYT @OttScott?
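As a rough sketch of what such an idea might look like (this is an assumption inferred from the later discussion about regex flags, not necessarily the actual implementation): a keyword wrapped in slashes, like /pattern/flags, is treated as a regex, and anything else as a plain case-insensitive string.

```js
// Hypothetical sketch: a keyword written as "/pattern/flags" is interpreted
// as a regex; any other keyword is matched as a plain, case-insensitive string.
function keywordMatchesUrl(keyword, url) {
  const asRegex = keyword.match(/^\/(.+)\/([a-z]*)$/);
  if (asRegex) {
    return new RegExp(asRegex[1], asRegex[2]).test(url);
  }
  return url.toLowerCase().includes(keyword.toLowerCase());
}

console.log(keywordMatchesUrl('nfl', 'https://www.google.com/search?q=NFL+scores'));   // true
console.log(keywordMatchesUrl('/balle\\w+/i', 'https://example.com/Ballerinas.html')); // true
```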
Why not just try both? If a regex match of the string fails to provide any results, then run a full-text search (lowercased, of course). That way you don't have any funny documentation about how the regex needs to be supplied. It just takes two passes per keyword.
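A minimal sketch of that alternative (hypothetical function name, just to make the two-pass idea concrete): try the keyword as a case-insensitive regex first, and if that finds nothing or the keyword isn't a valid regex, fall back to a lowercase substring search.

```js
// Hypothetical sketch of the "two passes per keyword" idea: a regex pass
// first, then a plain lowercase substring check as a fallback.
function matchesEitherWay(keyword, url) {
  try {
    if (new RegExp(keyword, 'i').test(url)) return true;
  } catch (e) {
    // The keyword isn't a valid regex; ignore and fall through.
  }
  return url.toLowerCase().includes(keyword.toLowerCase());
}

console.log(matchesEitherWay('balle.*', 'https://example.com/ballerinas')); // true (regex pass)
console.log(matchesEitherWay('nfl', 'https://www.NFL.com/'));               // true (either pass)
```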
In this case, the user won't be able to set the regex flags, unless we add a second input to specify the flags (& from a UX point of view, I don't like that). Another solution would be to add a switch next to the keyword input with 2 choices (string or regex), but I really don't wanna go through all these complications, 'cause then I'll need to change the keywords data structure & that would require more work (I'm lazy by nature 😁).
Well, I think a short piece of documentation is quite enough, & the regex format I used is widely known (it's the same as the JavaScript regex format). Another point is that I don't want people to abuse regex patterns in keywords, 'cause you can really match anything with them; you could even enter a URL regex & it would work, which would make the URLs list kinda useless. That's why I kinda prefer to keep regexes in keywords as a side/additional feature & not as the main one.
Deployed in version |
The current version requires a URL for the Blacklist/Whitelist.
This feature request seeks to remove the requirement for top-level domains (.com) and instead parse the entirety of the URL. This would allow specific words to be blocked instead of just sites, giving users more flexibility in how this tool is used.
So if someone found ballerinas distracting, they could just add 'ballerinas' or even 'balle*' to their blacklist. Or Football* or NFL* or yahoo* (which would include yahoosports.com).
I'm sure you see the benefit of such a flexible approach. Thanks for your consideration.
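For instance, a hedged sketch of what that could look like (the entries and matching logic below are illustrative assumptions, not the extension's actual syntax): each blacklist entry is treated as a case-insensitive wildcard pattern applied to the whole URL rather than only the domain.

```js
// Hypothetical sketch: check a URL against wildcard blacklist entries,
// matching anywhere in the full URL rather than only the domain.
const blacklist = ['balle*', 'Football*', 'NFL*', 'yahoo*'];

function isBlocked(url) {
  return blacklist.some((entry) => {
    const pattern = entry
      .replace(/[.+?^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
      .replace(/\*/g, '.*');                 // "*" acts as a wildcard
    return new RegExp(pattern, 'i').test(url);
  });
}

console.log(isBlocked('https://yahoosports.com/'));                        // true ("yahoo*")
console.log(isBlocked('https://www.bing.com/images/search?q=ballerinas')); // true ("balle*")
console.log(isBlocked('https://www.example.com/research'));                // false
```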